Re: [PATCH v1] arch_topology: Make cpu_capacity sysfs node read-only

From: Juri Lelli
Date: Thu Mar 07 2019 - 02:29:05 EST


Hi,

On 06/03/19 20:57, Lingutla Chandrasekhar wrote:
> If a user updates any cpu's cpu_capacity, the new value is applied to
> all of its online sibling cpus. This is not always correct, as sibling
> cpus (in ARM, cpus of the same micro architecture) can have different
> cpu_capacity values reflecting different performance characteristics.
> So applying the user supplied cpu_capacity to all of a cpu's siblings
> is not correct.
>
> Another problem is that the current code assumes that 'all cpus in a
> cluster, i.e. with the same package_id (core_siblings), have the same
> cpu_capacity'. But with commit '5bdd2b3f0f8 ("arm64: topology: add
> support to remove cpu topology sibling masks")', when a cpu is
> hotplugged out, its information is cleared from its sibling cpus. So a
> user supplied cpu_capacity is applied only to the sibling cpus that are
> online at that time. If a cpu is later hotplugged back in, it ends up
> with a different cpu_capacity than its siblings, which breaks the
> above assumption.
>
> So instead of mucking around with the core sibling mask for the user
> supplied value, use the device tree to set cpu capacity, and make the
> cpu_capacity node read-only so it only exposes the asymmetry between
> cpus in the system.
>
> Signed-off-by: Lingutla Chandrasekhar <clingutla@xxxxxxxxxxxxxx>
> ---
> drivers/base/arch_topology.c | 33 +--------------------------------
> 1 file changed, 1 insertion(+), 32 deletions(-)
>
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index edfcf8d..d455897 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -7,7 +7,6 @@
> */
>
> #include <linux/acpi.h>
> -#include <linux/arch_topology.h>
> #include <linux/cpu.h>
> #include <linux/cpufreq.h>
> #include <linux/device.h>
> @@ -51,37 +50,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
> static void update_topology_flags_workfn(struct work_struct *work);
> static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
>
> -static ssize_t cpu_capacity_store(struct device *dev,
> - struct device_attribute *attr,
> - const char *buf,
> - size_t count)
> -{
> - struct cpu *cpu = container_of(dev, struct cpu, dev);
> - int this_cpu = cpu->dev.id;
> - int i;
> - unsigned long new_capacity;
> - ssize_t ret;
> -
> - if (!count)
> - return 0;
> -
> - ret = kstrtoul(buf, 0, &new_capacity);
> - if (ret)
> - return ret;
> - if (new_capacity > SCHED_CAPACITY_SCALE)
> - return -EINVAL;
> -
> - mutex_lock(&cpu_scale_mutex);
> - for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
> - topology_set_cpu_scale(i, new_capacity);
> - mutex_unlock(&cpu_scale_mutex);
> -
> - schedule_work(&update_topology_flags_work);
> -
> - return count;
> -}
> -
> -static DEVICE_ATTR_RW(cpu_capacity);
> +static DEVICE_ATTR_RO(cpu_capacity);

There are cases in which this needs to be RW, as recently discussed
https://lore.kernel.org/lkml/20181123135807.GA14964@e107155-lin/

IMHO, if the core_sibling assumption doesn't hold in all cases, one
should look into fixing that, rather than making this RO.
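
E.g., an (entirely untested) sketch along these lines would keep the
node writable while dropping the sibling assumption, by updating only
the CPU being written to:

static ssize_t cpu_capacity_store(struct device *dev,
				  struct device_attribute *attr,
				  const char *buf,
				  size_t count)
{
	struct cpu *cpu = container_of(dev, struct cpu, dev);
	int this_cpu = cpu->dev.id;
	unsigned long new_capacity;
	ssize_t ret;

	if (!count)
		return 0;

	ret = kstrtoul(buf, 0, &new_capacity);
	if (ret)
		return ret;
	if (new_capacity > SCHED_CAPACITY_SCALE)
		return -EINVAL;

	mutex_lock(&cpu_scale_mutex);
	/* Only update the CPU being written to, not its core siblings. */
	topology_set_cpu_scale(this_cpu, new_capacity);
	mutex_unlock(&cpu_scale_mutex);

	schedule_work(&update_topology_flags_work);

	return count;
}

That would also sidestep the hotplug problem described in the
changelog, since the store path no longer consults any sibling mask.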

Best,

- Juri