Re: Clearing the I/O caches? (for benchmark tests)

Tom M. Kroeger (tmk@cse.ucsc.edu)
14 Jul 1999 10:57:16 -0700


Larry -- Thanks for the reply and code.

Larry McVoy <lm@bitmover.com> writes:
> This is part of lmbench and seems to work somewhat.
> [SNIP]

Unfortunately this ioctl wasn't able to clear the buffer cache to the
same level as after a reboot. The tables below show my results. I'm
using the patch from the Linux Scalability Project
(http://www.citi.umich.edu/projects/linux-scalability/) to instrument
the hashes and get my lookup counts and hit ratios. Each row is a run
of the build test. Before each run I call prune_dcache(0), then a
routine that goes through the dcache clearing all the pages of every
inode, then I call free_inode_memory(20), and finally the flushdisk
code from lmbench (sketched after the tables below). This sequence
still doesn't clear the caches as well as a simple program that
exhausts all memory does, but exhausting memory causes many other
things to die.

Using flushdisk:

      Wall   Page cache   Buff cache  Dentry cache  #            num
  #   clk    lkups  ht%   lkups  ht%   lkups  ht%   #  ios  blks   pgs   csw  forks
  1   44.1   55258( 68)   81439( 74)   52683( 97)   #  580  1160  5913  1726    183
  2   43.2   54587( 68)   74543( 80)   52683( 97)   #  454   908  3531  1534    183
  3   42.1   54539( 68)   74323( 80)   52683( 97)   #  419   838  3383  1475    183
  4   42.7   54539( 68)   74296( 80)   52683( 97)   #  426   852  3391  1472    183
  5   42.8   54539( 68)   74317( 80)   52683( 97)   #  427   854  3392  1476    183

Mallocing until memory is exhausted between each run:

      Wall   Page cache   Buff cache  Dentry cache  #            num
  #   clk    lkups  ht%   lkups  ht%   lkups  ht%   #  ios  blks   pgs   csw  forks
  1   44.0   55898( 68)   81418( 74)   52612( 97)   #  529  1124  6008  1771    183
  2   45.2   55923( 68)   81118( 74)   52643( 97)   #  517  1100  5974  1792    183
  3   44.5   55880( 68)   81453( 75)   52284( 98)   #  429   906  5869  1563    183
  4   43.2   55881( 68)   81139( 75)   52638( 97)   #  473   994  5912  1589    183
  5   44.1   55992( 68)   82094( 74)   52645( 97)   #  496  1076  6078  1680    183
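
For reference, the flushdisk step I'm calling is, as I understand it,
essentially a BLKFLSBUF ioctl on the block device. A minimal
user-space sketch of that kind of flush (the device path and the
wrapper program are my own placeholders, not the snipped lmbench
code):

/*
 * Sketch only: BLKFLSBUF asks the kernel to write out and drop the
 * buffer cache for the given block device.  Needs root, and
 * "/dev/hda1" is just a placeholder for the benchmark disk.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKFLSBUF */

int
main(void)
{
        int fd = open("/dev/hda1", O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, BLKFLSBUF, 0) < 0)
                perror("ioctl(BLKFLSBUF)");
        close(fd);
        return 0;
}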

I'd be content to use the memory-hogging approach, but it has the
unwanted side effect of killing several processes (klogd, syslogd,
and occasionally init).
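
For completeness, the memory hog is nothing fancier than mallocing in
chunks and touching every page until the machine runs out; a minimal
sketch along those lines (the chunk size and the printf are
arbitrary):

/*
 * Sketch of the memory-exhaustion approach: allocate in 1 MB chunks
 * and touch every byte so the pages are really instantiated, which
 * forces the VM to evict page/buffer cache -- and, as noted above,
 * eventually gets innocent processes killed as well.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK   (1024 * 1024)

int
main(void)
{
        unsigned long mb = 0;
        char *p;

        while ((p = malloc(CHUNK)) != NULL) {
                memset(p, 1, CHUNK);    /* touch the pages */
                mb++;
        }
        /* we may never get here if the kernel kills us first */
        printf("malloc gave out after %lu MB\n", mb);
        return 0;
}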

Is there some other place where data is cached (that I still need to
clear) that would account for the decrease in the number of buffer
cache lookups? Am I measuring something incorrectly? Is there some
other side effect?

Any other suggestions would be quite appreciated.

Again thanks for your time and help,

-- 

tmk

-----------------------------------------------------------------------
Tom M. Kroeger                               Pray for wind
Graduate Student, UC Santa Cruz          \   Pray for waves
e-mail: tmk@cs.ucsc.edu                  |\    and Pray it's your day off!
http://www.cse.ucsc.edu/~tmk             |~\
(831) 459-4458                           |__\
(831) 426-9055 home                  ,----+--
