Re: [PATCH v4 3/3] mm/mempolicy: introduce MPOL_WEIGHTED_INTERLEAVE for weighted interleaving

From: Gregory Price
Date: Wed Jan 31 2024 - 21:01:34 EST


On Thu, Feb 01, 2024 at 09:55:07AM +0800, Huang, Ying wrote:
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index b1437396c357..dfd097009606 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -2391,7 +2391,7 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
> > unsigned long nr_allocated = 0;
> > unsigned long rounds;
> > unsigned long node_pages, delta;
> > - u8 __rcu *table, *weights, weight;
> > + u8 __rcu *table, __rcu *weights, weight;
>
> The __rcu usage can be checked with `sparse` directly. For example,
>
> make C=1 mm/mempolicy.o
>
> More details can be found in
>
> https://www.kernel.org/doc/html/latest/dev-tools/sparse.html
>
> Per my understanding, we shouldn't use "__rcu" here. Please search
> "__rcu" in the following document.
>
> https://www.kernel.org/doc/html/latest/RCU/checklist.html
>

Thanks for this, I will sort this out and respond here with changes
before v5.
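For the record, the direction I have in mind is to keep the __rcu
annotation on the shared weight table pointer only (iw_table in this
series) and make the locals plain pointers that take the value returned
by rcu_dereference() inside the read-side critical section. Roughly
(untested sketch, allocation of the local buffer and error handling
elided):

	u8 *table, *weights, weight;	/* no __rcu on stack variables */
	...
	/* copy the weights out under RCU so the lock isn't held while allocating */
	rcu_read_lock();
	table = rcu_dereference(iw_table);
	if (table)
		memcpy(weights, table, nr_node_ids);
	rcu_read_unlock();

That should keep sparse happy without annotating stack variables, which
the RCU checklist says is wrong anyway.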

> > @@ -2460,17 +2454,10 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
> > node_pages += weight;
> > delta -= weight;
> > } else if (delta) {
> > + /* when delta is deleted, resume from that node */
> ~~~~~~~
> depleted?

ack.

> > +retry:
> > + /* to prevent miscount use tsk->mems_allowed_seq to detect rebind */
> > + cpuset_mems_cookie = read_mems_allowed_begin();
> > if (!current->il_weight || !node_isset(node, policy->nodes)) {
> > node = next_node_in(node, policy->nodes);
>
> node will be changed in the loop. So we need to change the logic here.
>

Good catch, stupid mistake. ack.
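To be concrete about what I plan to do: snapshot the starting node
before the seqcount read section and restore it on every retry, so a
rebind detected mid-walk doesn't make us resume from a partially
advanced node. Untested sketch, with the weighted page distribution
itself elided:

	int start_node = node;
	unsigned int cpuset_mems_cookie;
retry:
	node = start_node;
	cpuset_mems_cookie = read_mems_allowed_begin();
	if (!current->il_weight || !node_isset(node, policy->nodes))
		node = next_node_in(node, policy->nodes);
	/* ... compute rounds/delta and the per-node allocation counts ... */
	if (read_mems_allowed_retry(cpuset_mems_cookie))
		goto retry;

Will fix this up for v5.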

> > @@ -2388,10 +2401,17 @@ static unsigned long alloc_pages_bulk_array_weighted_interleave(gfp_t gfp,
> > int prev_node;
> > int i;
> >
> > +
>
> Change by accident?
>

ack.

~Gregory