Re: Raw devices broken in 2.6.1? AND- 2.6.1 I/O degraded?

From: Nick Piggin
Date: Fri Jan 30 2004 - 02:21:07 EST




Andrew Morton wrote:

> "Curt" <curt@xxxxxxxxxxxx> wrote:
>
>>> Or you can put 2.6 on par by setting
>>> /proc/sys/vm/dirty_background_ratio to 40 and dirty_ratio to 60.
>>
>> Okay, will do. Is there a good, comprehensive resource where I can read
>> up on these (and presumably many other I/O-related) variables?
>
> We've been relatively good about keeping the in-kernel documentation up to
> date. For this stuff, see Documentation/filesystems/proc.txt and
> Documentation/sysctl/vm.txt.
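(For reference, the two knobs quoted above can be poked at runtime; the
40/60 values are the ones suggested in the quote, not universal defaults:)

    # Start background writeback later (at 40% dirty memory) and let
    # writers dirty more memory overall (60%) before being throttled.
    echo 40 > /proc/sys/vm/dirty_background_ratio
    echo 60 > /proc/sys/vm/dirty_ratio

    # Equivalent via sysctl(8):
    sysctl -w vm.dirty_background_ratio=40 vm.dirty_ratio=60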


>>> Longer-term, if your customers are using scsi, you should ensure that the
>>> disks do not use a tag queue depth of more than 4 or 8. More than that and
>>> the anticipatory scheduler becomes ineffective and you won't get that
>>> multithreaded-read goodness.
>>
>> I've heard tell of tweaking the elevator parameter to 'deadline'; again,
>> could you point me to a resource where I can read up on this? And forgive
>> the newbie question, but is this a boot-time parameter, or a bit I can set
>> in the /proc system, or both?


> It's boot-time only. We were working on making it per-disk, but that was
> quite complex and we really didn't get there in time.
>
> So add `elevator=deadline' to your kernel boot command line. From my
> (brief) testing, it was a significant lose. It needs more work though:
> 2.6+deadline shouldn't be slower than 2.4.x.
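(A minimal sketch of what that looks like in a GRUB config; the title,
kernel image and root= below are made-up examples, and only the
elevator=deadline part comes from the advice above:)

    # /boot/grub/grub.conf -- hypothetical entry
    title Linux 2.6.1 (deadline io scheduler)
        kernel /boot/vmlinuz-2.6.1 ro root=/dev/hda1 elevator=deadline

With LILO you would instead add append="elevator=deadline" to the image
stanza in /etc/lilo.conf and re-run lilo.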



Another thing you can do, which is runtime and per-disk, is set
/sys/block/???/queue/iosched/antic_expire to 0, which gives you
something quite like deadline.
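For example ("sda" below is just a stand-in for whichever disk you
care about):

    # Turn off the anticipation window on one disk, at runtime.
    echo 0 > /sys/block/sda/queue/iosched/antic_expire

    # Verify:
    cat /sys/block/sda/queue/iosched/antic_expire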

>>> Please stay in touch, btw. If we cannot get applications such as yours
>>> working well, we've wasted our time...
>>
>> I'll do what I can to provide real-world feedback; I want this to work too.
>
> Thanks.


I'd be interested in taking a look at the io scheduler if you have
problems with these workloads in future.
