Re: POSIX violation by writeback error

From: Jeff Layton
Date: Thu Sep 27 2018 - 08:43:16 EST


On Tue, 2018-09-25 at 18:30 -0400, Theodore Y. Ts'o wrote:
> On Tue, Sep 25, 2018 at 12:41:18PM -0400, Jeff Layton wrote:
> > That's all well and good, but still doesn't quite solve the main concern
> > with all of this. Suppose we have this series of events:
> >
> > open file r/w
> > write 1024 bytes to offset 0
> > <background writeback that fails>
> > read 1024 bytes from offset 0
> >
> > Open, write and read are successful, and there was no fsync or close in
> > between them. Will that read reflect the result of the previous write or
> > no?
>
> If the background writeback hasn't happened, Posix requires that the
> read returns the result of the write. And the user doesn't know when
> or if the background writeback has happened unless the user calls
> fsync(2).
>
> Posix in general basically says anything is possible if the system
> fails or crashes, or is dropped into molten lava, etc. Do we say that
> Linux is not Posix compliant if a cosmic ray flips a few bits in the
> page cache? Hardly! The *only* time Posix makes any guarantees is if
> fsync(2) returns success. So the subject line, is in my opinion
> incorrect. The moment we are worrying about storage errors, and the
> user hasn't used fsync(2), Posix is no longer relevant for the
> purposes of the discussion.
>
> > The answer today is "it depends".
>
> And I think that's fine. The only way we can make any guarantees is
> if we do what Alan suggested, which is to imply that a read on a dirty
> page *block* until the page is successfully written back. This
> would destroy performance. I know I wouldn't want to use such a
> system, and if someone were to propose it, I'd strongly argue for a
> switch to turn it *off*, and I suspect most system administrators would
> turn it off once they saw what it did to system performance. (As a
> thought experiment, think about what it would do to kernel compiles.
> It means that before you link the .o files, you would have to block
> and wait for them to be written to disk so you could be sure the
> writeback would be successful. **Ugh**.)
>
> Given that many people would turn such a feature off once they saw
> what it does to their system performance, applications in general
> couldn't rely on it, which means applications that cared would have to
> do what they should have done all along. If it's precious data use
> fsync(2). If not, most of the time things are *fine* and it's not
> worth sacrificing performance for the corner cases unless it really is
> ultra-precious data and you are willing to pay the overhead.

Basically, the problem (as I see it) is that the kernel can end up
evicting uncleanable dirty data from the cache before the application
has a chance to call fsync, which means that the result of a read after
a write is not completely reliable.

We had some small discussion of this at LSF (mostly over malt beverages)
and wondered: could we offer a guarantee that uncleanable dirty data
will stick around until:

1) someone issues fsync() and scrapes the error

...or...

2) some timeout occurs (or we hit some other threshold? This part is
definitely open for debate)

That would at least allow an application issuing regular fsync calls to
reliably re-fetch write data via reads up until the point where we see
fsync fail. Those that don't issue regular fsyncs should be no worse off
than they are today.

Granted #2 above represents something of an open-ended commitment -- we
could have a bunch of writers that don't call fsync fill up memory with
uncleanable pages, and at that point we're sort of stuck.

That said, all of this is a rather theoretical problem. I've not heard
any reports of problems due to uncleanable data being evicted prior to
fsync, so I've not leapt to start rolling patches for this.
--
Jeff Layton <jlayton@xxxxxxxxxx>