Re: [PATCH v2 08/10] cpuset: allow to keep tasks in empty cpusets

From: Li Zefan
Date: Thu Jun 06 2013 - 06:27:22 EST


On 2013/6/6 4:51, Tejun Heo wrote:
> Hello, Li.
>
> On Wed, Jun 05, 2013 at 05:16:59PM +0800, Li Zefan wrote:
>> @@ -2092,11 +2183,13 @@ static void cpuset_propagate_hotplug_workfn(struct work_struct *work)
>> mutex_unlock(&cpuset_mutex);
>>
>> /*
>> - * If @cs became empty, move tasks to the nearest ancestor with
>> - * execution resources. This is full cgroup operation which will
>> + * If sane_behavior flag is set, we'll keep tasks in empty cpusets.
>> + *
>> + * Otherwise move tasks to the nearest ancestor with execution
>> + * resources. This is full cgroup operation which will
>> * also call back into cpuset. Should be done outside any lock.
>> */
>> - if (is_empty)
>> + if (!sane && is_empty)
>> remove_tasks_in_empty_cpuset(cs);
>>
>> /* the following may free @cs, should be the last operation */
>> @@ -2171,6 +2264,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
>> cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
>> mutex_unlock(&callback_mutex);
>> /* we don't mess with cpumasks of tasks in top_cpuset */
>> + update_tasks_cpumask_hier(&top_cpuset, false, NULL);
>> }
>
> I'm a little confused by the order of operation. We now have two
> different hierarchical walks for hotplug propagation, right? I
> suppose the above one is added because we now also need to update the
> mask when cpus are being brought online?
>

The first walk only updates tasks in empty cpusets (no matter whether
CPUs went online or offline), while the second walk only updates tasks
in non-empty cpusets, and only when CPUs go offline.
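
To make the split concrete, here is a tiny user-space toy model of that
division of labour. The names below (toy_cpuset, walk_empty_cpusets,
walk_nonempty_cpusets) are made up purely for illustration; this is not
the actual kernel code, just a sketch of which walk touches which kind
of cpuset:

#include <stdbool.h>
#include <stdio.h>

/* Made-up stand-in for a cpuset; only tracks what matters here. */
struct toy_cpuset {
	const char *name;
	bool became_empty;	/* lost all of its CPUs */
};

/* Walk 1 (the propagate-hotplug path): empty cpusets only,
 * regardless of whether CPUs went online or offline. */
static void walk_empty_cpusets(struct toy_cpuset *cs, bool sane)
{
	if (!cs->became_empty)
		return;
	if (sane)
		printf("%s: keep tasks, just refresh their cpumasks\n",
		       cs->name);
	else
		printf("%s: move tasks to nearest non-empty ancestor\n",
		       cs->name);
}

/* Walk 2 (the new update_tasks_cpumask_hier() call): non-empty
 * cpusets only, and only when CPUs went offline. */
static void walk_nonempty_cpusets(struct toy_cpuset *cs, bool cpus_offlined)
{
	if (cs->became_empty || !cpus_offlined)
		return;
	printf("%s: update tasks' cpumasks in place\n", cs->name);
}

int main(void)
{
	struct toy_cpuset emptied = { "emptied cpuset", true };
	struct toy_cpuset alive = { "non-empty cpuset", false };

	walk_empty_cpusets(&emptied, true);
	walk_empty_cpusets(&alive, true);
	walk_nonempty_cpusets(&emptied, true);
	walk_nonempty_cpusets(&alive, true);
	return 0;
}

So, as described above, each cpuset is touched by at most one of the
two walks.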

> I wonder whether it'd be possible to merge the two paths. My
> suspicion is that we probably don't need propagate_hotplug_work
> anymore now that we can drop RCU read lock while doing the pre-order
> walk. What do you think?
>

It can indeed be confusing. I'll see if we can make the code clearer.
