Re: [PATCH v2 18/18] x86/resctrl: Separate arch and fs resctrl locks

From: Reinette Chatre
Date: Thu Feb 02 2023 - 18:51:00 EST


Hi James,

On 1/13/2023 9:54 AM, James Morse wrote:
> resctrl has one mutex that is taken by the architecture specific code,
> and the filesystem parts. The two interact via cpuhp, where the
> architecture code updates the domain list. Filesystem handlers that
> walk the domains list should not run concurrently with the cpuhp
> callback modifying the list.
>
> Exposing a lock from the filesystem code means the interface is not
> cleanly defined, and creates the possibility of cross-architecture
> lock ordering headaches. The interaction only exists so that certain
> filesystem paths are serialised against cpu hotplug. The cpu hotplug
> code already has a mechanism to do this using cpus_read_lock().
>
> MPAM's monitors have an overflow interrupt, so it needs to be possible
> to walk the domains list in irq context. RCU is ideal for this,
> but some paths need to be able to sleep to allocate memory.
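(Not a problem with the patch, just to make the model concrete: an irq-context
walk of the domain list as described here would presumably look like the sketch
below, using the kernel's RCU list API. The function name is illustrative, not
part of this series.)

```c
/*
 * Sketch only: walking a resource's domain list from irq context.
 * mpam_scan_domains() is an illustrative name, not from this patch.
 */
static void mpam_scan_domains(struct rdt_resource *r)
{
	struct rdt_domain *d;

	rcu_read_lock();
	list_for_each_entry_rcu(d, &r->domains, list) {
		/* Read-only inspection of d; sleeping is not allowed here. */
	}
	rcu_read_unlock();
}
```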
>
> Because resctrl_{on,off}line_cpu() take the rdtgroup_mutex as part
> of a cpuhp callback, cpus_read_lock() must always be taken first.
> rdtgroup_schemata_write() already does this.
>
> All but one of the filesystem code's domain list walkers are
> currently protected by the rdtgroup_mutex taken in
> rdtgroup_kn_lock_live(). The exception is rdt_bit_usage_show()
> which takes the lock directly.

The new BMEC code also. You can find it on tip's x86/cache branch,
see mbm_total_bytes_config_write() and mbm_local_bytes_config_write().

>
> Make the domain list protected by RCU. An architecture-specific
> lock prevents concurrent writers. rdt_bit_usage_show() can
> walk the domain list under rcu_read_lock().
> The other filesystem list walkers need to be able to sleep.
> Add cpus_read_lock() to rdtgroup_kn_lock_live() so that the
> cpuhp callbacks can't be invoked when file system operations are
> occurring.
>
> Add lockdep_assert_cpus_held() in the cases where the
> rdtgroup_kn_lock_live() call isn't obvious.
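(This pattern is a good sanity check. For anyone following along, a walker
reached without going through rdtgroup_kn_lock_live() would look roughly like
the sketch below; the function name is illustrative.)

```c
/* Illustrative walker reached outside rdtgroup_kn_lock_live(). */
static void walk_domains_example(struct rdt_resource *r)
{
	struct rdt_domain *d;

	/* Caller must hold cpus_read_lock() so the list cannot change. */
	lockdep_assert_cpus_held();

	list_for_each_entry(d, &r->domains, list) {
		/* May sleep here; cpuhp writers are held off. */
	}
}
```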
>
> Resctrl's domain online/offline calls now need to take the
> rdtgroup_mutex themselves.
>
> Tested-by: Shaopeng Tan <tan.shaopeng@xxxxxxxxxxx>
> Signed-off-by: James Morse <james.morse@xxxxxxx>
> ---
> arch/x86/kernel/cpu/resctrl/core.c | 33 ++++++++------
> arch/x86/kernel/cpu/resctrl/ctrlmondata.c | 14 ++++--
> arch/x86/kernel/cpu/resctrl/monitor.c | 3 ++
> arch/x86/kernel/cpu/resctrl/pseudo_lock.c | 3 ++
> arch/x86/kernel/cpu/resctrl/rdtgroup.c | 54 ++++++++++++++++++++---
> include/linux/resctrl.h | 2 +-
> 6 files changed, 84 insertions(+), 25 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
> index 7896fcf11df6..dc1ba580c4db 100644
> --- a/arch/x86/kernel/cpu/resctrl/core.c
> +++ b/arch/x86/kernel/cpu/resctrl/core.c
> @@ -25,8 +25,14 @@
> #include <asm/resctrl.h>
> #include "internal.h"
>
> -/* Mutex to protect rdtgroup access. */
> -DEFINE_MUTEX(rdtgroup_mutex);
> +/*
> + * rdt_domain structures are kfree()d when their last cpu goes offline,
> + * and allocated when the first cpu in a new domain comes online.
> + * The rdt_resource's domain list is updated when this happens. The domain
> + * list is protected by RCU, but callers can also take the cpus_read_lock()
> + * to prevent modification if they need to sleep. All writers take this mutex:

Using "callers can" is not specific (compare to "callers should"). Please provide
clear guidance on how the locks should be used. A reader may wonder "why take
cpus_read_lock() to prevent modification, why not just take the mutex instead?"
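For example, the comment could spell out the two reader patterns explicitly,
along these lines (a sketch using only APIs this patch already relies on; the
helper names are illustrative):

```c
/* Readers that cannot sleep walk the list under RCU: */
rcu_read_lock();
list_for_each_entry_rcu(d, &r->domains, list)
	inspect_domain(d);		/* illustrative, must not sleep */
rcu_read_unlock();

/* Readers that may sleep instead hold off the cpuhp writers: */
cpus_read_lock();
list_for_each_entry(d, &r->domains, list)
	update_domain(d);		/* illustrative, may sleep */
cpus_read_unlock();
```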

> + */
> +static DEFINE_MUTEX(domain_list_lock);
>
> /*
> * The cached resctrl_pqr_state is strictly per CPU and can never be
> @@ -483,6 +489,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
> struct rdt_domain *d;
> int err;
>
> + lockdep_assert_held(&domain_list_lock);
> +
> d = rdt_find_domain(r, id, &add_pos);
> if (IS_ERR(d)) {
> pr_warn("Couldn't find cache id for CPU %d\n", cpu);
> @@ -516,11 +524,12 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
> return;
> }
>
> - list_add_tail(&d->list, add_pos);
> + list_add_tail_rcu(&d->list, add_pos);
>
> err = resctrl_online_domain(r, d);
> if (err) {
> - list_del(&d->list);
> + list_del_rcu(&d->list);
> + synchronize_rcu();
> domain_free(hw_dom);
> }
> }
> @@ -541,7 +550,8 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
> cpumask_clear_cpu(cpu, &d->cpu_mask);
> if (cpumask_empty(&d->cpu_mask)) {
> resctrl_offline_domain(r, d);
> - list_del(&d->list);
> + list_del_rcu(&d->list);
> + synchronize_rcu();
>
> /*
> * rdt_domain "d" is going to be freed below, so clear

Should domain_remove_cpu() also get a "lockdep_assert_held(&domain_list_lock)"?
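That is, mirroring the assert added to domain_add_cpu() (sketch, existing body
unchanged):

```c
static void domain_remove_cpu(int cpu, struct rdt_resource *r)
{
	lockdep_assert_held(&domain_list_lock);

	/* ... existing body unchanged ... */
}
```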

> @@ -569,30 +579,27 @@ static void clear_closid_rmid(int cpu)
> static int resctrl_arch_online_cpu(unsigned int cpu)
> {
> struct rdt_resource *r;
> - int err;
>
> - mutex_lock(&rdtgroup_mutex);
> + mutex_lock(&domain_list_lock);
> for_each_capable_rdt_resource(r)
> domain_add_cpu(cpu, r);
> clear_closid_rmid(cpu);
> + mutex_unlock(&domain_list_lock);

Why is clear_closid_rmid(cpu) protected by the mutex?

>
> - err = resctrl_online_cpu(cpu);
> - mutex_unlock(&rdtgroup_mutex);
> -
> - return err;
> + return resctrl_online_cpu(cpu);
> }
>
> static int resctrl_arch_offline_cpu(unsigned int cpu)
> {
> struct rdt_resource *r;
>
> - mutex_lock(&rdtgroup_mutex);
> resctrl_offline_cpu(cpu);
>
> + mutex_lock(&domain_list_lock);
> for_each_capable_rdt_resource(r)
> domain_remove_cpu(cpu, r);
> clear_closid_rmid(cpu);
> - mutex_unlock(&rdtgroup_mutex);
> + mutex_unlock(&domain_list_lock);

Same question here: why does clear_closid_rmid(cpu) need to be under the mutex?

>
> return 0;
> }

Reinette