Re: pdflush eating a lot of CPU on heavy NFS I/O

From: Brent Cook
Date: Wed Apr 28 2004 - 23:35:14 EST


On Wed, 28 Apr 2004, Andrew Morton wrote:

> Brent Cook <busterbcook@xxxxxxxxx> wrote:
> >
> > sync_sb_inodes: write inode c55d25bc
> > __sync_single_inode: writepages in nr_pages:25 nr_to_write:949
> > pages_skipped:0 en:0
> > __sync_single_inode: writepages in nr_pages:25 nr_to_write:949
> > pages_skipped:0 en:0
>
> uh-huh.
>
> Does this fix it?

I'm going to run a compile/load test overnight, but the test that
triggered it every time previously failed to do so with this patch.

pdflush is behaving so far, and I'll say you've figured it out for now,
with the final verdict in about 8 hours.

Does this mean that, if there were too many dirty pages and not enough
time to write them all back, the dirty inode list just stopped being
traversed, stuck on a single inode? That would explain why this was
easier to trigger on NFS than on a local FS, due to the extra latency, etc.

Maybe I should look into tuning my NFS server while we're at it, as yours
must be much faster ;)

- Brent

> 25-akpm/fs/fs-writeback.c | 2 +-
> 1 files changed, 1 insertion(+), 1 deletion(-)
>
> diff -puN fs/fs-writeback.c~a fs/fs-writeback.c
> --- 25/fs/fs-writeback.c~a 2004-04-28 21:01:37.012603336 -0700
> +++ 25-akpm/fs/fs-writeback.c 2004-04-28 21:02:00.701002152 -0700
> @@ -191,8 +191,8 @@ __sync_single_inode(struct inode *inode,
> */
> inode->i_state |= I_DIRTY_PAGES;
> inode->dirtied_when = jiffies;
> - list_move(&inode->i_list, &sb->s_dirty);
> }
> + list_move(&inode->i_list, &sb->s_dirty);
> } else if (inode->i_state & I_DIRTY) {
> /*
> * Someone redirtied the inode while were writing back
>
> _
>