truncate_inode_pages (2.3.17): inefficient?

Martin Schenk (schenkm@eunet.at)
Thu, 09 Sep 1999 19:52:05 +0200


The way truncate_inode_pages (mm/filemap.c) is implemented in 2.3.17
could lead to slow performance in the following situation:
- the first 10000 pages in the page list have offset < start
- the next 10000 pages have offset >= start (and are going to be removed)
For each of the removed pages, the first 10000 entries have to be
rescanned, leading to about 100 million executions of the inner loop.
[In 2.2.*, only locked pages (with I/O going on) lead to a "goto repeat"
and a rescan.]
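
To make the cost concrete, here is a stripped-down sketch of that scan
pattern (not the actual 2.3.17 source: struct page is reduced to an
offset and a next pointer, and remove_page() is a made-up stand-in for
remove_inode_page()):

	/* Minimal stand-ins for the kernel structures, illustration only. */
	struct page {
		unsigned long offset;
		struct page *next;
	};

	/* Hypothetical helper standing in for remove_inode_page():
	 * unlink a page from a singly-linked list. */
	static void remove_page(struct page **head, struct page *victim)
	{
		struct page **p;

		for (p = head; *p; p = &(*p)->next) {
			if (*p == victim) {
				*p = victim->next;
				return;
			}
		}
	}

	/* 2.3.17-style scan: every removal restarts the walk from the
	 * head of the list, so all pages with offset < start are
	 * re-examined after each page removed. */
	static void truncate_pages_rescan(struct page **head,
					  unsigned long start)
	{
		struct page *page;

	repeat:
		for (page = *head; page; page = page->next) {
			if (page->offset >= start) {
				remove_page(head, page);
				goto repeat;
			}
		}
	}

With the 10000/10000 example above, the inner loop body runs on the
order of 10000 * 10000 = 10^8 times.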

Is it really necessary to rescan after every removal, or would it be
enough to set a flag "repeat_necessary" which leads to a "goto repeat"
after the loop?
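
Roughly something like the following (reusing struct page and
remove_page() from the sketch above; page_is_busy() is a made-up
stand-in for "page locked, I/O going on", which in 2.2 was the only
reason to rescan):

	/* Hypothetical stand-in for "page locked, I/O in flight". */
	static int page_is_busy(const struct page *page)
	{
		(void)page;
		return 0;	/* sketch: assume no page is busy */
	}

	/* Proposed variant: finish the whole pass first, remember
	 * whether any busy page had to be skipped, and rescan only in
	 * that case.  Note the next pointer is cached before the
	 * unlink - whether that is safe against the list changing
	 * underneath us is exactly the question. */
	static void truncate_pages_flagged(struct page **head,
					   unsigned long start)
	{
		struct page *page, *next;
		int repeat_necessary;

	repeat:
		repeat_necessary = 0;
		for (page = *head; page; page = next) {
			next = page->next;	/* grab before unlink */
			if (page->offset < start)
				continue;
			if (page_is_busy(page)) {
				repeat_necessary = 1;	/* revisit later */
				continue;
			}
			remove_page(head, page);
		}
		if (repeat_necessary)
			goto repeat;
	}

This way the pages with offset < start are walked once per pass instead
of once per removal.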

If this is not a safe way of doing things, why is calling
remove_inode_page safe?

Sorry if I'm totally wrong, but I'm trying to understand.

Martin

-- 
Martin Schenk <schenkm@eunet.at>
Richard-Wagnergasse 46, A-8010 Graz, Austria
