Re: [PATCH v2 0/5] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

From: Aneesh Kumar K V
Date: Mon Apr 25 2022 - 04:55:21 EST


On 4/25/22 1:39 PM, Aneesh Kumar K V wrote:
On 4/25/22 11:40 AM, ying.huang@xxxxxxxxx wrote:
On Mon, 2022-04-25 at 09:20 +0530, Aneesh Kumar K.V wrote:
"ying.huang@xxxxxxxxx" <ying.huang@xxxxxxxxx> writes:

Hi, All,

On Fri, 2022-04-22 at 16:30 +0530, Jagdish Gediya wrote:

[snip]

I think it is necessary to either have per-node demotion targets
configuration or the user-space interface supported by this patch
series. As we don't have a clear consensus on what the user interface
should look like, we can defer the per-node demotion target set
interface until the real need arises.

The current patch series sets N_DEMOTION_TARGETS from the dax kmem
driver; it is possible that some memory node desired as a demotion
target is not detected in the system via the dax kmem probe path.

It is also possible that some of the dax devices are not preferred as
demotion targets, e.g. HBM; for such devices, the node shouldn't be set
in N_DEMOTION_TARGETS. In the future, support should be added to
distinguish such dax devices and not mark them as N_DEMOTION_TARGETS
from the kernel, but for now this user-space interface will be useful
to avoid using such devices as demotion targets.

We can add a read-only interface to view per-node demotion targets
at /sys/devices/system/node/nodeX/demotion_targets, remove the
duplicated /sys/kernel/mm/numa/demotion_target interface and instead
make /sys/devices/system/node/demotion_targets writable.
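
To make that concrete, a rough sketch of how the proposed files might
be used; the values and the write format (a plain node list) are my
assumption, since this interface is only a proposal at this point:

$ cat /sys/devices/system/node/node0/demotion_targets
2
# echo 2 > /sys/devices/system/node/demotion_targets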

Huang, Wei, Yang,
What do you suggest?

We cannot remove a kernel ABI in practice.  So we need to get it right
the first time.  Let's try to collect some information for the kernel
ABI definition.

The below is just a starting point, please add your requirements.

1. Jagdish has some machines with DRAM-only NUMA nodes, but they don't
want to use those as demotion targets.  But I don't think this is an
issue in practice for now, because demote-in-reclaim is disabled by
default.

It is not just that demotion can be disabled. We should be able to
use demotion on a system where we can find DRAM-only NUMA nodes. That
cannot be achieved by /sys/kernel/mm/numa/demotion_enabled. It needs
something similar to N_DEMOTION_TARGETS.
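
For reference, that global knob only turns demote-in-reclaim on and
off and gives no control over which nodes may serve as targets;
something like:

$ cat /sys/kernel/mm/numa/demotion_enabled
false
# echo true > /sys/kernel/mm/numa/demotion_enabled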


Can you show the NUMA information of your machines with DRAM-only nodes
and PMEM nodes?  We can try to find the proper demotion order for the
system.  If you cannot show it, we can defer N_DEMOTION_TARGETS until
the machine is available.


Sure, I will find one such config. As you might have noticed, this is very easy to end up with in a virtualization setup, because the hypervisor can assign memory to a guest VM from a NUMA node that has no CPUs assigned to the same guest. This depends on the configuration of the other guest VM instances running on the system. So any virtualization config that has persistent memory attached can easily end up like this.



Something like this:

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13392 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1971 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10
$ cat /sys/bus/nd/devices/dax0.0/target_node
2
$
# cd /sys/bus/dax/drivers/
:/sys/bus/dax/drivers# ls
device_dax kmem
:/sys/bus/dax/drivers# cd device_dax/
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > ../kmem/new_id
:/sys/bus/dax/drivers/device_dax# numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13380 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1961 MB
node 2 cpus:
node 2 size: 0 MB
node 2 free: 0 MB
node distances:
node   0   1   2
  0:  10  40  80
  1:  40  10  80
  2:  80  80  10
:/sys/bus/dax/drivers/device_dax#
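
As a quick cross-check on such a setup, the standard node state files
show which nodes are CPU-less (the output below is what I would expect
for the config above, not a captured run):

$ cat /sys/devices/system/node/has_cpu
0
$ cat /sys/devices/system/node/online
0-2

Only node 0 has CPUs; nodes 1 and 2 are memory-only, but only node 2
(the dax kmem backed one) is the node we would want in
N_DEMOTION_TARGETS.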