Hello Eduardo.
This sounds exactly like a problem which we encountered some time
ago with our database system.
As a workaround, we use a patch from Scott Laird
<laird@pacificrim.net>
which changes ext2_sync_file() to fall back to file_fsync() when the
file size exceeds a few MB. Due to the way ext2_sync_file() is
implemented, it is bound to cause trouble when performing an fsync()
on bigger files (which is required for databases).
Maybe something like this should be included in 2.0.37 until a more
efficient way is found to deal with fsync() on bigger files (e.g. keep
a list of modified blocks per file).
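For what it's worth, the "list of modified blocks per file" idea can be
sketched in userspace roughly like this (a minimal sketch only; the names
dirty_list, mark_dirty and sync_dirty are made up for illustration, and a
real in-kernel version would hang the list off the inode and issue actual
block writes):

```c
#include <assert.h>
#include <stdio.h>

#define MAX_DIRTY 1024

/* Per-file record of the blocks dirtied since the last sync. */
struct dirty_list {
	unsigned long blocks[MAX_DIRTY];
	int count;
};

/* Remember a modified block number; ignore duplicates. */
void mark_dirty(struct dirty_list *dl, unsigned long block)
{
	int i;
	for (i = 0; i < dl->count; i++)
		if (dl->blocks[i] == block)
			return;
	if (dl->count < MAX_DIRTY)
		dl->blocks[dl->count++] = block;
}

/* Flush only the recorded blocks instead of walking every block of
 * the file; returns the number of blocks "written" and resets the
 * list. */
int sync_dirty(struct dirty_list *dl)
{
	int written = dl->count;
	/* a real implementation would issue writes for
	 * dl->blocks[0..count) here */
	dl->count = 0;
	return written;
}
```

With something like this, an fsync() on a huge file that only touched a
couple of blocks would cost a couple of block writes rather than a scan
over every block of the file.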
Michael
--
Michael Marxmeier        Marxmeier Software GmbH
E-Mail: mike@msede.com   Besenbruchstrasse 9
Voice : +49 202 2431440  42285 Wuppertal, Germany
Fax   : +49 202 2431420  http://www.msede.com/

fast-fsync-patch:

--- linux-2.0.36/fs/ext2/fsync.c.orig	Mon Feb  1 03:35:25 1999
+++ linux-2.0.36/fs/ext2/fsync.c	Mon Feb  1 03:42:04 1999
@@ -10,6 +10,8 @@
  *  linux/fs/minix/truncate.c   Copyright (C) 1991, 1992  Linus Torvalds
  *
  *  ext2fs fsync primitive
+ *
+ *  Fast 'fsync' on large files (Scott Laird <laird@pacificrim.net>)
  */
 
 #include <asm/segment.h>
@@ -172,6 +174,13 @@
 		 * Don't sync fast links!
 		 */
 		goto skip;
+
+	/* fsync on large files is *slow*, so fall back to sync() if
+	 * the file's over 10M */
+	if (inode->i_size > 10000000) {
+		file_fsync(inode, file);
+		goto skip;
+	}
 
 	for (wait=0; wait<=1; wait++) {