Re: Things I wish I'd known about Inotify

From: Eric Paris
Date: Fri Apr 04 2014 - 11:48:48 EST


On Fri, 2014-04-04 at 15:00 +0200, David Herrmann wrote:

> 1)
> IN_IGNORED is async and _immediate_ when a file is deleted. So if
> you use watch-descriptors as keys for your objects, an _already_ used
> key might be returned by inotify_add_watch() while an IN_IGNORED is
> still queued for the old watch (the deletion implicitly destroys the
> watch). Once you read the IN_IGNORED from the queue, there is usually
> no way to tell whether it was generated by the old watch or by the
> new one. The man-page mentions this:
> "IN_IGNORED: Watch was removed explicitly (inotify_rm_watch(2)) or
> automatically (file was deleted, or filesystem was unmounted)."
> I think we should add a note to BUGS that mentions this race (which is
> really not obvious from the description).
>
> This race could be fixed by requiring an explicit inotify_rm_watch()
> if an IN_IGNORED was generated asynchronously.

For a brief while after the introduction of fsnotify this was a problem,
but it wasn't before then, nor on anything remotely recent (the last 4-5
years?). We didn't re-use watch descriptors at all, so if you get a
notification after the IN_IGNORED, it's still from the old watch. Today
it's possible to wrap around at INT_MAX and start reusing, but that is a
teeny-tiny issue...
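
To make the wd-as-key pattern concrete, here is a minimal userspace
sketch (illustration only, not code from either of our trees; the paths
are placeholders, the deletion of /tmp/a is assumed to happen
externally, and error handling is omitted):

#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t len;
	char *p;

	int fd = inotify_init1(IN_CLOEXEC);
	int wd = inotify_add_watch(fd, "/tmp/a", IN_ALL_EVENTS);

	/* ... /tmp/a is deleted here: the kernel queues IN_IGNORED for
	 * 'wd' and frees the descriptor ... */

	int wd2 = inotify_add_watch(fd, "/tmp/b", IN_ALL_EVENTS);
	/* on a kernel that reuses freed descriptors, wd2 may equal wd */

	len = read(fd, buf, sizeof(buf));
	for (p = buf; p < buf + len;
	     p += sizeof(struct inotify_event)
		  + ((struct inotify_event *)p)->len) {
		struct inotify_event *ev = (struct inotify_event *)p;

		if ((ev->mask & IN_IGNORED) && ev->wd == wd2) {
			/* old watch or new one?  there is no way to
			 * tell, so an object keyed on the wd can be
			 * torn down for the wrong watch */
		}
	}
	return 0;
}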

----

> Note that both of these races rely on watch-descriptors being reused
> after they were freed. It turns out that was "fixed" almost exactly
> one year ago in:
>
> commit a66c04b4534f9b25e1241dff9a9d94dff9fd66f8
> Author: Jeff Layton <jlayton@xxxxxxxxxx>
> Date: Mon Apr 29 16:21:21 2013 -0700
>
>     inotify: convert inotify_add_to_idr() to use idr_alloc_cyclic()
>
> So if that was never backported, only older kernels are affected. In
> newer kernels, wd reuse is quite unlikely. The races are still there,
> though.

----

Actually, that has nothing to do with it. If anything, it reintroduces
the reuse, since the allocator now wraps around instead of failing...
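
For reference, a paraphrased sketch of the two allocation strategies
(not the literal fs/notify/inotify code; 'last_wd' stands in for the
per-group hint inotify keeps, and error handling is omitted):

	/* before the commit: allocate strictly above the last wd handed
	 * out, never revisiting freed ids -- once the id space is
	 * exhausted at INT_MAX, allocation simply fails */
	wd = idr_alloc(idr, i_mark, last_wd + 1, 0, GFP_NOWAIT);

	/* after the commit: idr_alloc_cyclic() also searches upward from
	 * the last id handed out, but when it reaches the end of the
	 * range it wraps back to 'start' and can hand out freed ids
	 * again -- it wraps instead of fails, so reuse is unlikely,
	 * not impossible */
	wd = idr_alloc_cyclic(idr, i_mark, 1, 0, GFP_NOWAIT);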
