Re: [PATCH v6] hugetlb: Add hugetlb.*.numa_stat file

From: Shakeel Butt
Date: Sat Nov 13 2021 - 14:15:20 EST


On Sat, Nov 13, 2021 at 6:48 AM Mina Almasry <almasrymina@xxxxxxxxxx> wrote:
>
> On Fri, Nov 12, 2021 at 6:45 PM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> >
> > On Sat, Nov 13, 2021 at 7:36 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
> > >
> > > On 11/10/21 6:36 PM, Muchun Song wrote:
> > > > On Thu, Nov 11, 2021 at 9:50 AM Mina Almasry <almasrymina@xxxxxxxxxx> wrote:
> > > >>
> > > >> +struct hugetlb_cgroup_per_node {
> > > >> + /* hugetlb usage in pages over all hstates. */
> > > >> + atomic_long_t usage[HUGE_MAX_HSTATE];
> > > >
> > > > Why do you use atomic? IIUC, 'usage' is always
> > > > increased/decreased under hugetlb_lock except
> > > > hugetlb_cgroup_read_numa_stat() which is always
> > > > reading it. So I think WRITE_ONCE/READ_ONCE
> > > > is enough.
> > >
> > > Thanks for continuing to work on this; I was traveling and unable to
> > > comment.
> >
> > Have a good time.
> >
> > >
> > > Unless I am missing something, I do not see a reason for WRITE_ONCE/READ_ONCE
> >
> > Because __hugetlb_cgroup_commit_charge and
> > hugetlb_cgroup_read_numa_stat can run in parallel,
> > which meets the definition of a data race. I believe
> > KCSAN could report this race. I don't strongly
> > suggest using WRITE_ONCE/READ_ONCE() here, but
> > in theory it should be like this, right?
> >
>
> My understanding is that the (only) potential problem here is
> read_numa_stat() reading an intermediate garbage value while
> commit_charge() is happening concurrently. This will only happen on
> archs where the writes to an unsigned long aren't atomic. On archs
> where writes to an unsigned long are atomic, there is no race, because
> read_numa_stat() will only ever read the value before the concurrent
> write or after the concurrent write, both of which are valid. To cater
> to archs where the writes to unsigned long aren't atomic, we need to
> use an atomic data type.
>
> I'm not too familiar with them, but my understanding from reading the
> documentation is that WRITE_ONCE/READ_ONCE don't contribute anything
> meaningful here:
>
> /*
> * Prevent the compiler from merging or refetching reads or writes. The
> * compiler is also forbidden from reordering successive instances of
> * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
> * particular ordering. One way to make the compiler aware of ordering is to
> * put the two invocations of READ_ONCE or WRITE_ONCE in different C
> * statements.
> ...
>
> I can't see a reason why we care about the compiler merging or
> refetching reads or writes here. As far as I can tell, the problem is
> the atomicity of the write.
>

We have the following options:

1) Use atomic type for usage.
2) Use "unsigned long" for usage along with WRITE_ONCE/READ_ONCE.
3) Use hugetlb_lock for hugetlb_cgroup_read_numa_stat as well.

All options are valid but we would like to avoid (3).

What if we use the "unsigned long" type but without READ_ONCE/WRITE_ONCE?
The potential issues with that are that KCSAN will report this as a race,
and a possible garbage value on archs which do not support atomic writes
to unsigned long.

Shakeel