Re: [PATCH v3 4/4] mm/mempolicy: change cur_il_weight to atomic and carry the node with it

From: Huang, Ying
Date: Tue Jan 30 2024 - 00:20:51 EST


Gregory Price <gregory.price@xxxxxxxxxxxx> writes:

> On Tue, Jan 30, 2024 at 11:15:35AM +0800, Huang, Ying wrote:
>> Gregory Price <gregory.price@xxxxxxxxxxxx> writes:
>>
>> > On Mon, Jan 29, 2024 at 10:48:47AM -0500, Gregory Price wrote:
>> >> On Mon, Jan 29, 2024 at 04:17:46PM +0800, Huang, Ying wrote:
>> >> > Gregory Price <gregory.price@xxxxxxxxxxxx> writes:
>> >> >
>> >> > But, in contrast, it's bad to put task-local "current weight" in
>> >> > mempolicy. So, I think that it's better to move cur_il_weight to
>> >> > task_struct. And maybe combine it with current->il_prev.
>> >> >
>> >> Style question: is it preferable to add an anonymous union into task_struct:
>> >>
>> >> union {
>> >>         short il_prev;
>> >>         atomic_t wil_node_weight;
>> >> };
>> >>
>> >> Or should I break out that union explicitly in mempolicy.h?
>> >>
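>> >> For example, something like this (purely illustrative; the union name
>> >> il_prev_state is just a placeholder):
>> >>
>> >> /* include/linux/mempolicy.h */
>> >> union il_prev_state {
>> >>         short il_prev;
>> >>         atomic_t wil_node_weight;
>> >> };
>> >>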
>> >
>> > Having attempted this, it looks like including mempolicy.h into sched.h
>> > is a non-starter. There are build issues, likely associated with the
>> > nested include of uapi/linux/mempolicy.h.
>> >
>> > So I went ahead and did the following. Style-wise, if it's better to just
>> > integrate this as an anonymous union in task_struct, let me know, but it
>> > seemed better to add some documentation here.
>> >
>> > I also added static get/set functions to mempolicy.c to touch these
>> > values accordingly.
>> >
>> > As suggested, I changed things to allow a 0-weight in il_prev.node_weight
>> > and adjusted the logic accordingly. I will be testing this for a day or so
>> > before sending out new patches.
>> >
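>> > Roughly what I have in mind for the accessors, as an illustration only
>> > (untested, and the exact bit packing is a placeholder):
>> >
>> > static void get_wil_prev(unsigned int *node, u8 *weight)
>> > {
>> >         unsigned int val = atomic_read(&current->wil_node_weight);
>> >
>> >         /* placeholder packing: node in the high bits, weight in the low byte */
>> >         *node = val >> 8;
>> >         *weight = val & 0xff;
>> > }
>> >
>> > static void set_wil_prev(unsigned int node, u8 weight)
>> > {
>> >         atomic_set(&current->wil_node_weight, (node << 8) | weight);
>> > }
>> >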
>>
>> Thinking about this again, it seems that we don't need to touch
>> task->il_prev and task->il_weight during rebinding for weighted
>> interleave either.
>>
>
> It's not clear to me that this is the case. cpusets takes the task_lock to
> change mems_allowed and rebind task->mempolicy, but I do not see the
> allocation path taking the task lock, so allocations are not blocked.
>
> Comments from cpusets suggest allocations can happen in parallel.
>
> /*
>  * cpuset_change_task_nodemask - change task's mems_allowed and mempolicy
>  * @tsk: the task to change
>  * @newmems: new nodes that the task will be set
>  *
>  * We use the mems_allowed_seq seqlock to safely update both tsk->mems_allowed
>  * and rebind an eventual tasks' mempolicy. If the task is allocating in
>  * parallel, it might temporarily see an empty intersection, which results in
>  * a seqlock check and retry before OOM or allocation failure.
>  */
>
>
> For normal interleave, this isn't an issue because it always proceeds to
> the next node. The same is not true of weighted interleave, which may
> have a hanging weight in task->il_weight.
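>
> To make the race concrete, here is roughly the interleaving I'm worried
> about (illustrative timeline, not actual code):
>
> /*
>  * allocating task                      cpuset_change_task_nodemask()
>  * ---------------                      -----------------------------
>  * reads il_prev = N, il_weight = W
>  *                                      task_lock(tsk);
>  *                                      mpol_rebind_task() drops node N
>  *                                      task_unlock(tsk);
>  * keeps allocating from node N until
>  * the stale weight W is used up
>  */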

So, I added a check as follows:

node_isset(current->il_prev, policy->nodes)

If the previous node has been removed from the nodemask, allocation will
proceed to the next node. Otherwise, it's safe to use current->il_weight.

--
Best Regards,
Huang, Ying

> That is why I looked to combine the two, so at least node/weight were
> carried together.
>
>> unsigned int weighted_interleave_nodes(struct mempolicy *policy)
>> {
>>         unsigned int nid;
>>         struct task_struct *me = current;
>>
>>         nid = me->il_prev;
>>         if (!me->il_weight || !node_isset(nid, policy->nodes)) {
>>                 nid = next_node_in(...);
>>                 me->il_prev = nid;
>>                 me->il_weight = weights[nid];
>>         }
>>         me->il_weight--;
>>
>>         return nid;
>> }
>
> I ended up with this:
>
> static unsigned int weighted_interleave_nodes(struct mempolicy *policy)
> {
>         unsigned int node;
>         u8 weight;
>
>         get_wil_prev(&node, &weight);
>         /* If nodemask was rebound, just fetch the next node */
>         if (!weight) {
>                 node = next_node_in(node, policy->nodes);
>                 /* can only happen if nodemask has become invalid */
>                 if (node == MAX_NUMNODES)
>                         return node;
>                 weight = get_il_weight(node);
>         }
>         weight--;
>         set_wil_prev(node, weight);
>         return node;
> }
>
> ~Gregory