Re: [PATCH 5/8] sched, numa, mm: Add adaptive NUMA affinity support

From: Christoph Lameter
Date: Mon Nov 12 2012 - 19:02:32 EST



On Mon, 12 Nov 2012, Peter Zijlstra wrote:

> We define 'shared memory' as all user memory that is frequently
> accessed by multiple tasks and conversely 'private memory' is
> the user memory used predominantly by a single task.

"All"? Should that not be "a memory segment that is frequently..."?

> Using this, we can construct two per-task node-vectors, 'S_i'
> and 'P_i' reflecting the amount of shared and privately used
> pages of this task respectively. Pages for which two consecutive
> 'hits' are of the same cpu are assumed private and the others
> are shared.

The classification is per task? But most tasks have some memory areas
that are private and other areas where shared accesses occur. Could the
classification be per memory area instead? Private areas need to stay
with the process; shared areas may have to be spread across nodes if
the memory area is too large.

I guess that is too complicated to determine unless we use vmas, which
may only roughly correlate to the memory regions for which memory
policies are currently set up manually.

But then this is rather different from the expectations I had after
reading the intro.
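
Just to make sure I read the heuristic right, here is a rough sketch of
how I understand the per-task accounting. The structure and function
names and the per-page last-cpu field are mine for illustration, not
taken from the patch:

/*
 * Rough sketch of the "two consecutive hits" classification as I read
 * it -- not the patch code.  A page hit twice in a row from the same
 * cpu is counted as private, otherwise as shared, and the result goes
 * into the per-task, per-node vectors P_i and S_i.
 */
struct task_numa_stats {
	unsigned long private_pages[MAX_NUMNODES];	/* P_i */
	unsigned long shared_pages[MAX_NUMNODES];	/* S_i */
};

static void account_numa_hit(struct task_numa_stats *stats,
			     int *page_last_cpu, int this_cpu,
			     int page_nid)
{
	if (*page_last_cpu == this_cpu)
		stats->private_pages[page_nid]++;  /* same cpu twice: private */
	else
		stats->shared_pages[page_nid]++;   /* cpu changed: shared */

	*page_last_cpu = this_cpu;
}

If that is accurate, then the private/shared distinction is entirely
per task and per page, with nothing at the granularity of a memory
area.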

> We also add an extra 'lateral' force to the load balancer that
> perturbs the state when otherwise 'fairly' balanced. This
> ensures we don't get 'stuck' in a state which is fair but
> undesired from a memory location POV (see can_do_numa_run()).

We do useless moves and create additional overhead?
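
The picture I have in mind of the 'lateral' perturbation is something
like the following, purely as a toy illustration -- allow_lateral_move()
and its parameters are made up here, this is not the actual
can_do_numa_run():

/*
 * Toy illustration only -- not the actual can_do_numa_run() from the
 * patch.  The idea as quoted: even when two runqueues look fair, a
 * move may still be allowed if it brings the task to the node that
 * holds most of its pages.
 */
static bool allow_lateral_move(bool improves_balance,
			       int dst_nid, int task_preferred_nid)
{
	if (improves_balance)
		return true;		/* the normal fairness criterion */

	/*
	 * The perturbation: permit a move that is neutral for fairness
	 * but lands on the task's preferred memory node.  The question
	 * is what bounds this, so we do not just migrate back and
	 * forth and pay the overhead for nothing.
	 */
	return dst_nid == task_preferred_nid;
}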
