Re: [PATCH v3 00/11] Performance fixes for 9p filesystem

From: Eric Van Hensbergen
Date: Sat Feb 04 2023 - 16:39:20 EST


Okay, thanks for the additional detail - I have an idea of what the
problem might be.

As far as my tests go - I've predominantly tested with dbench, fsx (with
and without mmap tests), postmark, and bonnie -- running single and
multithreaded. All seemed to work fine and reported no errors. I
thought the dbench trace was based on a build, but perhaps that's
inaccurate, or perhaps there's something peculiar about it being the
root file system (I have always just mounted it after boot, not tried
booting with it as root).

(thinking out loud)
In any case, I think the fact that we see that error in readahead
mode is the key hint, because it means something is being treated as
writeback cached when it shouldn't be. The writeback is triggered by
the setattr, which always calls filemap_write_and_wait -- this is all
old code, not something added by this series. My assumption was that
if the inode had no dirty data (pending writebacks) then it would just
do nothing, and that should be the case in readahead mode. So we have
to figure out why it thinks it has dirty data. Looking at the code
where the warning is printed, it's a WARN_ONCE, so it's quite possible
we are hitting this left and right -- we can probably switch that to
always print to get an idea of whether this is the root cause. I also
need to add some more debug code to understand what we think we are
writing back, since anything there should have been flushed when the
file was closed.
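
Roughly the kind of debug change I have in mind -- just a sketch, not
the actual warning site in fs/9p, and the helper name and message are
placeholders:

    #include <linux/fs.h>
    #include <linux/pagemap.h>

    /* Placeholder for the existing WARN_ONCE site: warn on every hit
     * instead of only once, and dump which inode we think has dirty
     * data, so we can see how often this fires with cache=readahead.
     */
    static void p9_debug_unexpected_writeback(struct inode *inode)
    {
            struct address_space *mapping = inode->i_mapping;

            WARN(1, "9p: unexpected writeback, ino=%lu dirty=%d writeback=%d\n",
                 inode->i_ino,
                 mapping_tagged(mapping, PAGECACHE_TAG_DIRTY),
                 mapping_tagged(mapping, PAGECACHE_TAG_WRITEBACK));
    }
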
To your multithreaded concern, I suppose there could be a race between
flushing mmap writes and the setattr also calling writeback, but the
folio is supposed to be locked at that point, so you would think there
would only be one writeback in flight. This will be easier to
understand once I reproduce it and have a full trace, so we know which
file we are talking about and what other operations might have been in
flight.
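
A rough sketch of the extra trace I mean for the setattr path -- this
is the assumed shape of that path, not a verbatim copy of the 9p code,
and the helper name is made up:

    #include <linux/fs.h>
    #include <linux/pagemap.h>
    #include <linux/printk.h>

    /* Hypothetical hook right before the unconditional flush in
     * setattr: record whether the mapping is actually tagged dirty at
     * that moment, so the trace tells us which file still had dirty
     * folios and lets us correlate it with any racing mmap flush. */
    static int p9_debug_setattr_flush(struct inode *inode)
    {
            struct address_space *mapping = inode->i_mapping;

            if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
                    pr_info("9p: setattr flushing dirty folios, ino=%lu\n",
                            inode->i_ino);

            /* existing behaviour: flush and wait unconditionally */
            return filemap_write_and_wait(mapping);
    }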

There is a case in mmap that I was worried always required writeback,
but I did enough unit testing to convince myself that wasn't the case
-- so it could be something down that path, but I'll reproduce your
environment first and see if I can get the same types of errors (I'm
most of the way there at this point; it's just that we are digging out
from an ice storm here in Texas, so there's been more chainsawing than
coding....).

-eric

On Sat, Feb 4, 2023 at 7:40 AM Christian Schoenebeck
<linux_oss@xxxxxxxxxxxxx> wrote:
>
> On Friday, February 3, 2023 8:12:14 PM CET Eric Van Hensbergen wrote:
> > Hi Christian, thanks for the feedback -- will dig in and see if I can
> > find what's gone south here. Clearly my approach to writeback without
> > writeback_fid didn't cover all the corner cases, and that's the cause of
> > the fault. Can I get a better idea of how to reproduce - you booted
> > with a root 9p file system, and then tried to build...what?
>
> KDE, which builds numerous packages, multi-threaded by default. In the past we
> had 9p issues that triggered only after hours of compiling, but in this case I
> don't think you need to build anything fancy, because it already fails at the
> very beginning of any build process, when it is just detecting a compiler.
>
> May I ask what kind of scenario you have tested so far? It was not a multi-
> threaded context, right? Large chunk or small chunk I/O?
>
> > Performance degradation is interesting; it runs counter to the
> > unit testing and benchmarking I did, but I didn't do something as
> > logical as a build to check -- I need to tease apart whether this is a
> > read problem, a write problem...or both. My intuition is that it's on
> > the write side, but as part of going through the code I made the cache
> > code a lot more pessimistic, so it's possible I inadvertently killed an
> > optimistic optimization.
>
> I have not walked down the road of investigating the individual I/O errors or
> their cause yet, but my feeling is that it could also be related to fid vs.
> writeback_fid. I saw you dropped a fix we made there last year, but I haven't
> checked yet whether your changes handle that case correctly in another way.
>
> > Finally, just to clarify, the panic you had at the end happened with
> > readahead? Seems interesting because clearly it thought it was
> > writing back something that it shouldn't have been writing back (since
> > writeback caches weren't enabled). I'm thinking something was marked
> > as dirty even though the underlying system just wrote-through the
> > change, so the writeback isn't actually required. This may also be
> > an indicator of the performance issue, if we are actually writing
> > through the data in addition to doing an unnecessary write-back (and I
> > also worry that in the second case we are writing back bad data).
>
> It was not a kernel panic. It's a warning that appears right after boot, but
> the system continues to run. So that warning is printed before starting the
> actual build process. And yes, the warning is printed with "readahead".
>
> > Can you give me an idea of what the other misbehaviors were?
>
> There were really all sorts of misbehaviour at the application level, e.g. no
> command history being available from the shell (arrow up/down), things hanging
> on the shell for a long time, error messages. And after the writeahead test the
> build directory was screwed, i.e. even after rebooting with a regular kernel
> things no longer built correctly, so I had to restore a snapshot.
>
> Best regards,
> Christian Schoenebeck
>
>