RE: [PATCH v2 21/24] selftests/resctrl: Read in less obvious order to defeat prefetch optimizations

From: Shaopeng Tan (Fujitsu)
Date: Thu Jun 01 2023 - 02:16:03 EST


Hi Ilpo,

> > > When reading memory in order, HW prefetching optimizations will
> > > interfere with measuring how caches and memory are being accessed.
> > > This adds noise into the results.
> > >
> > > Change the fill_buf reading loop to not use an obvious in-order
> > > access using multiply by a prime and modulo.
> > >
> > > Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@xxxxxxxxxxxxxxx>
> > > ---
> > > tools/testing/selftests/resctrl/fill_buf.c | 17 ++++++++++-------
> > > 1 file changed, 10 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
> > > index 7e0d3a1ea555..049a520498a9 100644
> > > --- a/tools/testing/selftests/resctrl/fill_buf.c
> > > +++ b/tools/testing/selftests/resctrl/fill_buf.c
> > > @@ -88,14 +88,17 @@ static void *malloc_and_init_memory(size_t s)
> > >
> > > static int fill_one_span_read(unsigned char *start_ptr, unsigned char *end_ptr)
> > > {
> > > - unsigned char sum, *p;
> > > -
> > > + unsigned int size = (end_ptr - start_ptr) / (CL_SIZE / 2);
> > > + unsigned int count = size;
> > > + unsigned char sum;
> > > +
> > > + /*
> > > + * Read the buffer in an order that is unexpected by HW prefetching
> > > + * optimizations to prevent them interfering with the caching pattern.
> > > + */
> > > sum = 0;
> > > - p = start_ptr;
> > > - while (p < end_ptr) {
> > > - sum += *p;
> > > - p += (CL_SIZE / 2);
> > > - }
> > > + while (count--)
> > > + sum += start_ptr[((count * 59) % size) * CL_SIZE / 2];
> >
> > Could you please elaborate why 59 is used?
>
> The main reason is that it's a prime number ensuring the whole buffer gets read.
> I picked something that doesn't make it wrap on almost every iteration.

Thanks for your explanation. It seems there is no problem.
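
For reference, here is a minimal standalone sketch of the coverage argument, assuming the multiplier (59) and the number of slots stay coprime; the 1000-slot size below is only an illustration, not the selftest's actual buffer size:

```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Hypothetical slot count; the argument only needs gcd(59, size) == 1. */
	const unsigned int size = 1000;
	unsigned int count = size;
	unsigned char *visited = calloc(size, 1);
	unsigned int covered = 0;

	if (!visited)
		return 1;

	while (count--) {
		/* Same indexing pattern as the patch: multiply by a prime, then modulo. */
		unsigned int idx = (count * 59) % size;

		if (!visited[idx]) {
			visited[idx] = 1;
			covered++;
		}
	}

	/* Prints "covered 1000 of 1000": every slot is visited exactly once. */
	printf("covered %u of %u\n", covered, size);
	free(visited);
	return 0;
}
```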

Perhaps you have already tested this patch in your environment and the result was "ok".
In my environment, however, because HW prefetching no longer works well,
the IMC counter fluctuates a lot and the test result is "not ok".

To ensure this test set passes ("ok") in any environment,
would you consider changing the value of MAX_DIFF_PERCENT for each test,
or changing something else?
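
For what it is worth, the numbers below are consistent with the check comparing the relative difference between the IMC and resctrl bandwidth against MAX_DIFF_PERCENT; the formula in this sketch is my reading of the log, not a quote of the actual mbm_test.c code:

```
#include <stdio.h>
#include <stdlib.h>

/* The current MBM limit being questioned ("Check MBM diff within 5%"). */
#define MAX_DIFF_PERCENT	5

int main(void)
{
	/* Values taken from the log below. */
	long avg_bw_imc = 6202;
	long avg_bw_resc = 5585;
	long avg_diff_per = labs(avg_bw_resc - avg_bw_imc) * 100 / avg_bw_imc;

	/* |5585 - 6202| * 100 / 6202 = 9, which exceeds the 5% limit. */
	printf("avg_diff_per: %ld%% -> %s\n", avg_diff_per,
	       avg_diff_per > MAX_DIFF_PERCENT ? "not ok" : "ok");
	return 0;
}
```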

```
Environment:
Kernel: 6.4.0-rc2
CPU: Intel(R) Xeon(R) Gold 6254 CPU @ 3.10GHz

Test result (MBM as an example):
# # Starting MBM BW change ...
# # Mounting resctrl to "/sys/fs/resctrl"
# # Benchmark PID: 8671
# # Writing benchmark parameters to resctrl FS
# # Write schema "MB:0=100" to resctrl FS
# # Checking for pass/fail
# # Fail: Check MBM diff within 5%
# # avg_diff_per: 9%
# # Span in bytes: 262144000
# # avg_bw_imc: 6202
# # avg_bw_resc: 5585
# not ok 1 MBM: bw change
```

Best regards,
Shaopeng TAN