Re: i_size still not SMP safe.

Alexander Viro
Wed, 4 Aug 1999 21:18:49 -0400 (EDT)

On Wed, 4 Aug 1999, Manfred Spraul wrote:

> I read that Linus wants to issue a feature freeze soon, so I want to remind
> the VFS developers that (AFAIK) i_size and file->f_pos are not yet SMP safe
> (I posted this a few weeks ago for 2.3.6)
> Description (everything for ext2):
> sys_write() calls generic_file_write() without acquiring the big kernel
> lock, and generic_file_write() uses f_pos and i_size without any locks.
> Note that O_APPEND is broken as well, and this might be the larger problem.

I'm moving the modification of f_pos and i_size inside the per-page
lock in the FAT patch. With FAT things get additional trickiness, since we
don't have holes (arrgh...). It is not enough for a complete solution,

> My only idea is quite ugly, so I hope someone has a better solution:
> 2 new fields in every inode: a linked list (contains all operations
> currently executing on the inode) and a wait queue (for all
> collisions).

	No, thanks. First of all, i_size must be set by ->truncate(),
which should be called from notify_change(). Then we probably need a
sleeping variant of an rwlock here - sys_write() being the reader and
->truncate() the writer ;-) i_size should become atomic, but that's the last
problem I really care about - the real races are not SMP-dependent here. I'll
do it after FAT becomes testable - I *hope* I'll be able to post an
initial variant of the FAT patch in a couple of days. Darn RL stuff...

> Note that currently do_truncate() acquires inode->i_sem before calling
> i_op->truncate, but I don't see why. (sys_open(O_TRUNC) truncates
> a file, but it doesn't acquire the inode semaphore.)

Atavism. sys_write() used to grab it too.

> Btw, I think that the lock_kernel() call in sys_lseek() is superfluous:
> sys_lseek() accesses the same fields as sys_write(), but sys_write()
> does not acquire the big kernel lock.

Atavism again.
