Re: Errors in the VM - detailed

From: Andrew Morton (akpm@zip.com.au)
Date: Thu Jan 31 2002 - 15:43:08 EST


Jens Axboe wrote:
>
> That will trigger _all the time_ even on a moderately busy machine.
> Checking if tossing away read-ahead is the issue is probably better
> tested with just increasing the request slots. Roy, please try and change
> the queue_nr_requests assignment in ll_rw_blk:blk_dev_init() to
> something like 2048.
>

heh. Yep, Roger finally nailed it, I think.

Roy says the bug was fixed in rmap11c. Changelog says:

rmap 11c:
  ...
  - elevator improvement (Andrew Morton)

Which includes:

-	queue_nr_requests = 64;
-	if (total_ram > MB(32))
-		queue_nr_requests = 128;
+	queue_nr_requests = (total_ram >> 9) & ~15;	/* One per half-megabyte */
+	if (queue_nr_requests < 32)
+		queue_nr_requests = 32;
+	if (queue_nr_requests > 1024)
+		queue_nr_requests = 1024;

So Roy is running with 1024 requests.
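
To see why: assuming total_ram is in kilobytes (as the
/* One per half-megabyte */ comment implies), >> 9 gives one
request slot per 512K. A quick sketch of the rule; nr_requests()
is an invented name here, not the real code:

	/*
	 * Sketch of the rmap-11c sizing rule above.  nr_requests()
	 * is an invented helper; total_ram_kb is assumed to be in
	 * kilobytes, as in blk_dev_init().
	 */
	static unsigned int nr_requests(unsigned long total_ram_kb)
	{
		unsigned int n = (total_ram_kb >> 9) & ~15;	/* one per 512K */

		if (n < 32)
			n = 32;
		if (n > 1024)
			n = 1024;
		return n;
	}

32MB gives 64 slots, 128MB gives 256, and anything from 512MB
up hits the 1024 cap.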

The question is (sorry, Roy): does this need fixing?

The only things which can trigger it are zillions of threads
doing reads (or zillions of outstanding aio read requests),
or a large number of unmerged write requests sitting in the
elevator. It's a rare case.

If we _do_ need a fix, then perhaps we should just stop
using READA in the readahead code? Readahead is absolutely
vital to throughput, and best-effort request allocation
just isn't good enough.
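
For the archives, the distinction in question, as a simplified
sketch (not the real ll_rw_blk.c code; the helper names are
invented):

	/*
	 * Simplified sketch of 2.4-style request allocation.  The
	 * helpers are invented names, not the real ll_rw_blk.c ones.
	 */
	struct request *get_request_for(request_queue_t *q, int rw)
	{
		struct request *rq = try_alloc_request(q, rw);	/* NULL when no slots free */

		if (rq == NULL) {
			if (rw == READA)
				return NULL;	/* best-effort: caller tosses the readahead */
			rq = wait_for_free_request(q, rw);	/* READ/WRITE: sleep for a slot */
		}
		return rq;
	}

Switching readahead from READA to READ would make it sleep for
a free request instead of silently dropping pages, at the cost
of letting readahead block.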

Thoughts?
