Re: [linus:master] [xfs] 2edf06a50f: fsmark.files_per_sec -5.7% regression

From: Dave Chinner
Date: Fri May 12 2023 - 19:05:14 EST


On Fri, May 12, 2023 at 03:44:29PM +0800, Oliver Sang wrote:
> hi, Dave Chinner,
>
> On Tue, May 09, 2023 at 05:10:53PM +1000, Dave Chinner wrote:
> > On Tue, May 09, 2023 at 04:54:33PM +1000, Dave Chinner wrote:
> > > On Tue, May 09, 2023 at 10:13:19AM +0800, kernel test robot wrote:
> > > >
> > > >
> > > > Hello,
> > > >
> > > > kernel test robot noticed a -5.7% regression of fsmark.files_per_sec on:
> > > >
> > > >
> > > > commit: 2edf06a50f5bbe664283f3c55c480fc013221d70 ("xfs: factor xfs_alloc_vextent_this_ag() for _iterate_ags()")
> > > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> > >
> > > This is just a refactoring patch and doesn't change any logic.
> > > Hence I'm sceptical that it actually resulted in a performance
> > > regression. Indeed, the profile indicates a significant change of
> > > behaviour in the allocator and I can't see how the commit above
> > > would cause anything like that.
> > >
> > > Was this a result of a bisect? If so, what were the original kernel
> > > versions where the regression was detected?
> >
> > Oh, CONFIG_XFS_DEBUG=y, which means:
> >
> > static int
> > xfs_alloc_ag_vextent_lastblock(
> > 	struct xfs_alloc_arg	*args,
> > 	struct xfs_alloc_cur	*acur,
> > 	xfs_agblock_t		*bno,
> > 	xfs_extlen_t		*len,
> > 	bool			*allocated)
> > {
> > 	int			error;
> > 	int			i;
> >
> > #ifdef DEBUG
> > 	/* Randomly don't execute the first algorithm. */
> > 	if (get_random_u32_below(2))
> > 		return 0;
> > #endif
> >
> > We randomly choose a near block allocation strategy to improve code
> > coverage, not the optimal one for IO performance. Hence the CPU
> > usage and allocation patterns that impact IO performance are simply
> > not predictable or reproducible from run to run. So, yeah, trying to
> > bisect a minor difference in performance as a result of this
> > randomness will not be reliable....
>
> Thanks a lot for the guidance!
>
> We plan to disable XFS_DEBUG (as well as XFS_WARN) in our performance tests.
> Could you confirm whether this is the correct thing to do?

You can use XFS_WARN=y with performance tests - that elides all the
debug-specific code that changes behaviour but leaves all the
ASSERT-based correctness checks in the code.
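
As a minimal sketch, the relevant fragment of a performance-test
kconfig would then look something like:

  CONFIG_XFS_WARN=y
  # CONFIG_XFS_DEBUG is not set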

> and I guess we should still keep them in functional tests, am I right?

Yes.

> BTW, regarding this case, we tested again with XFS_DEBUG (as well as
> XFS_WARN) disabled; the kconfig is attached, and the only diff from
> last time is:
> -CONFIG_XFS_DEBUG=y
> -CONFIG_XFS_ASSERT_FATAL=y
> +# CONFIG_XFS_WARN is not set
> +# CONFIG_XFS_DEBUG is not set
>
> but we still observed a similar regression:
>
> ecd788a92460eef4            2edf06a50f5bbe664283f3c55c4
> ----------------            ---------------------------
>          %stddev     %change         %stddev
>              \          |                \
>    8176057 ± 15%      +4.7%    8558110        fsmark.app_overhead
>      14484            -6.3%      13568        fsmark.files_per_sec

So the application spent ~5% more CPU time in userspace, and the rate
at which the kernel processed IO went down by ~6%. Seems to me like
everything is running slower, not just the kernel code....
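
(Spelling that out from the table: 8558110 / 8176057 ~= 1.047, i.e.
~4.7% more app overhead, while 13568 / 14484 ~= 0.937, i.e. ~6.3%
fewer files per second.)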

>     100.50 ± 5%       +0.3%     100.83        turbostat.Avg_MHz
>       5.54 ± 11%      +0.3        5.82        turbostat.Busy%
>       1863 ± 19%      -6.9%       1733        turbostat.Bzy_MHz

Evidence that the CPU is running at a ~7% lower busy clock rate
(Bzy_MHz) when the results are ~6% slower is a bit suspicious to me.
Shouldn't the CPU clock rate be fixed to the same value for A-B
performance regression testing?
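
As a sketch, assuming the test boxes have cpupower available and a
cpufreq driver that honours it, something like this before each run
would take frequency scaling out of the picture:

  # cpupower frequency-set -g performance
  # cpupower frequency-set -d 2.0GHz -u 2.0GHz

(2.0GHz is just an illustrative placeholder - use whatever your
hardware can sustain). The equivalent sysfs knob is
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor.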

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx