Re: [PATCH 01/13] writeback: IO-less balance_dirty_pages()

From: Wu Fengguang
Date: Sun Dec 05 2010 - 11:14:49 EST


On Wed, Dec 01, 2010 at 09:38:18PM +0800, Wu Fengguang wrote:
> [restore CC list for new findings]
>
> On Wed, Dec 01, 2010 at 06:39:25AM +0800, Peter Zijlstra wrote:
> > On Tue, 2010-11-30 at 23:35 +0100, Peter Zijlstra wrote:
> > > On Tue, 2010-11-30 at 12:37 +0800, Wu Fengguang wrote:
> > > > On Tue, Nov 30, 2010 at 04:53:33AM +0800, Peter Zijlstra wrote:
> > > > > On Mon, 2010-11-29 at 23:17 +0800, Wu Fengguang wrote:
> > > > > > Hi Peter,
> > > > > >
> > > > > > I'm drawing funny graphs to track the writeback dynamics :)
> > > > > >
> > > > > > In the attached graphs, I see anomalies in dirty-pages-3000.png and
> > > > > > dirty-pages-200.png. The task limit is what's returned by
> > > > > > task_dirty_limit(), which should be very stable. However, from the
> > > > > > graph it seems the task weight (numerator/denominator) suddenly
> > > > > > drops to near 0 every 9-10 seconds. Do you have any immediate insight
> > > > > > into what's going on? If not, I'm going to do some tracing to track down
> > > > > > how the numbers change over time.
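
For anyone not familiar with that code path: below is a rough model of how
the per-task limit is derived from that weight. It is my simplification of
the task_dirty_limit() logic, not the kernel source (the 1/8 factor in
particular is just my reading of current mainline). The point is that the
threshold a task sees is lowered in proportion to its num/den share of
recent dirtying, so when the weight collapses to 0 the task limit jumps
back up to the full threshold, which is exactly the sudden "skip high"
pattern in the graphs.

#include <stdio.h>

/*
 * Rough model of the task_dirty_limit() scaling (a sketch, not the
 * kernel source): the dirty threshold handed to a task is lowered by
 * up to 1/8, scaled by that task's share (num/den) of recently
 * dirtied pages.
 */
static unsigned long task_limit(unsigned long thresh,
				unsigned long num, unsigned long den)
{
	return thresh - thresh / 8 * num / den;
}

int main(void)
{
	unsigned long thresh = 100000;	/* pages, arbitrary for the example */

	/* a dd doing nearly all the dirtying vs. a weight that just collapsed */
	printf("limit at 93%% weight: %lu\n", task_limit(thresh, 93, 100));
	printf("limit at  0%% weight: %lu\n", task_limit(thresh,  0, 100));
	return 0;
}
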
> > > > >
> > > > > No immediate thoughts there.. I need to look through the math again, but
> > > > > I'm kinda swamped atm. (and my primary dev machine had its disk die this
> > > > > morning). I'll try and get around to it soon..
> > > >
> > > > Peter, I did a simple debug patch (attached) and collected these
> > > > numbers. I noticed that at the "task_weight=27%" and "task_weight=14%"
> > > > lines, "period" increases, "num" is decreased while "den" is still
> > > > increasing.
> > > >
> > > > num=db2e den=e8c0 period=3f8000 shift=10
> > > > num=e04c den=ede0 period=3f8000 shift=10
> > > > num=e56a den=f300 period=3f8000 shift=10
> > >
> > > > num=3e78 den=e400 period=408000 shift=10
> > >
> > > > num=1341 den=8900 period=418000 shift=10
> > > > num=185f den=8e20 period=418000 shift=10
> > > > num=1d7d den=9340 period=418000 shift=10
> > > > num=229b den=9860 period=418000 shift=10
> > > > num=27b9 den=9da0 period=418000 shift=10
> > > > num=2cd7 den=a2c0 period=418000 shift=10
> > >
> > >
> > > This looks sane.. the period indicates someone else was dirtying lots of
> > > pages. Every time the period increases (it's shifted right by shift) we
> > > divide the events (num) by 2.
> >
> > It's actually shifted left by shift-1.. see prop_norm_single(), which
> > would make the below:
> >
> > > So the increment from 3f8000 to 408000 is 4064 to 4128, or 64; that
> > > should reset events to 0, so the fact that it didn't means it got
> > > incremented as well.
> > >
> > > Funny enough, the second jump is again exactly 64..
> > >
> > > Anyway, as you can see, den increases as long as period stays constant
> > > and takes a dip when period increments.
> >
> > two steps of 128, which is terribly large.
> >
> > then again, a period of 512 pages is very very small.
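
To make the decay concrete, here is a minimal userspace model of the
prop_norm_single() behaviour (a sketch only, not lib/proportions.c itself),
using the shift=10 and the 0x3f8000 -> 0x408000 jump from the trace above:
every elapsed period of 2^(shift-1) = 512 global events halves a task's
local count, and a 0x10000 jump is 128 such periods, enough to shift the
count all the way down to 0.

#include <stdio.h>

#define SHIFT	10
#define PERIOD	(1UL << (SHIFT - 1))	/* 512 events per period */

struct local {
	unsigned long events;	/* "num" in the debug lines above */
	unsigned long period;	/* global counter value at last catch-up */
};

/* Catch a task's local counter up with the global one, decaying it. */
static void norm(struct local *pl, unsigned long global)
{
	unsigned long periods = (global - pl->period) >> (SHIFT - 1);

	if (periods < sizeof(unsigned long) * 8)
		pl->events >>= periods;	/* halve once per elapsed period */
	else
		pl->events = 0;		/* too long ago: nothing left */
	pl->period = global & ~(PERIOD - 1);
}

int main(void)
{
	/* last sample before the jump in the trace above */
	struct local dd = { .events = 0xe56a, .period = 0x3f8000 };

	norm(&dd, 0x408000);	/* 0x10000 = 65536 events = 128 periods */
	printf("events after the jump: %#lx\n", dd.events);
	return 0;
}
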
>
> Peter, I also collected prop_norm_single() traces, hope it helps.
>
> Again, you can find the time points when the task limit suddenly skips
> high in the "dirty-pages*.png" graphs, and then find the corresponding
> data points in the "trace" file. Sorry, I computed something wrong: the
> "ratio" field in the trace data is always 0, please just ignore it.
>
> I noticed that jbd2/sda8-8-2811 dirtied lots of pages, perhaps via
> ext4_bio_write_page(), which should happen only on -ENOMEM. I also

Ah, I seem to have found the root cause. See the attached graphs. Ext4
appears to be calling redirty_page_for_writepage() to redirty ~300MB of
pages every ~10s. The redirties happen in big bursts, so, not
surprisingly, the dd task's dirty weight suddenly drops to 0.
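
As a back-of-envelope check that such bursts are enough to explain the
collapse (my numbers, assuming 4KB pages and the shift=10 seen in the
trace):

#include <stdio.h>

int main(void)
{
	unsigned long bytes = 300UL << 20;		/* ~300MB redirtied per burst */
	unsigned long pages = bytes >> 12;		/* 4KB pages: 76800 events */
	unsigned long periods = pages >> (10 - 1);	/* 512-event periods: 150 */

	/* 150 periods of halving is more than enough to zero dd's "num" */
	printf("%lu pages -> %lu periods of decay\n", pages, periods);
	return 0;
}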

It should be the same ext4 issue discussed here:

http://www.spinics.net/lists/linux-fsdevel/msg39555.html

Thanks,
Fengguang

Attachment: vmstat-written-300.png
Description: PNG image

Attachment: vmstat-written.png
Description: PNG image