Re: New NUMA scheduler and hotplug CPU

From: Nick Piggin
Date: Mon Jan 26 2004 - 19:15:26 EST

Andrew Theurer wrote:

To me, it'd make more sense to add the CPUs to the scheduler structures
as they get brought online. I can also imagine machines where you have
a massive (infinite?) variety of possible CPUs that could appear -
like a NUMA box where you could just plug arbitrary numbers of new
nodes in as you wanted.

I guess so, but you'd still need NR_CPUS to be >= that arbitrary
number.
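
To make that concrete, here is a minimal sketch (invented names, not
the actual kernel code) of why the compile-time limit bites: per-CPU
scheduler state is statically sized, so any hotplugged CPU has to fit
under NR_CPUS.

#include <errno.h>

#define NR_CPUS 32                      /* compile-time upper bound */

struct runqueue {
        unsigned long nr_running;
        /* ... */
};

/* One statically allocated runqueue per possible CPU. */
static struct runqueue runqueues[NR_CPUS];

int sketch_cpu_up(unsigned int cpu)
{
        if (cpu >= NR_CPUS)
                return -EINVAL;         /* no storage provisioned for it */
        runqueues[cpu].nr_running = 0;
        return 0;
}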


Moreover, as the CPUs aren't fixed numbers in advance, how are you going
to know which node to put them in, etc? Setting up every possible thing
in advance seems like an infeasible way to do hotplug to me.

Well, this would be the problem. I guess it's quite possible that
one doesn't know the topology of newly added CPUs beforehand.

Well OK, this would require a per-architecture function to handle
CPU hotplug. It could possibly just default to arch_init_sched_domains
and completely reinitialise everything, which would be simplest.
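
Something like the following default would be enough. The hook name
and the weak-symbol wiring are just illustrative; arch_init_sched_domains
is the initialiser mentioned above.

/* Builds the full domain hierarchy from scratch. */
extern void arch_init_sched_domains(void);

/*
 * Illustrative default hook (name invented): an architecture with no
 * smarter incremental update just throws the topology away and
 * rebuilds it whenever a CPU comes or goes. An arch with cheap
 * incremental updates can override this symbol.
 */
void __attribute__((weak)) arch_hotplug_sched_domains(unsigned int cpu)
{
        (void)cpu;      /* the rebuild-everything default ignores it */
        arch_init_sched_domains();
}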


Call me crazy, but why not let the topology be determined via userspace
at a more appropriate time? When you hotplug, you tell it where in the
scheduler to plug it. Have structures in the scheduler which represent
the node-runqueue-cpu topology (in the past I tried node/rq/cpu structs
with simple pointers), but let the topology be built based on the
user's desires through hotplug.
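
Something like this, purely illustrative (every name here is invented):
a new CPU gets plugged under whatever runqueue userspace chooses, and
the node->runqueue->cpu pointers are all the topology there is.

#define RQ_MAX_CPUS 8

struct sched_rq;

struct sched_cpu {
        unsigned int id;
        struct sched_rq *rq;            /* runqueue this CPU feeds */
};

struct sched_rq {
        struct sched_node *node;        /* node owning this runqueue */
        struct sched_cpu *cpu[RQ_MAX_CPUS];
        unsigned int nr_cpus;
};

struct sched_node {
        struct sched_rq *rq[4];         /* runqueues on this node */
        unsigned int nr_rqs;
};

/* Userspace-directed hotplug: attach a CPU where the user said to. */
int plug_cpu(struct sched_rq *rq, struct sched_cpu *cpu)
{
        if (rq->nr_cpus >= RQ_MAX_CPUS)
                return -1;              /* runqueue already full */
        cpu->rq = rq;
        rq->cpu[rq->nr_cpus++] = cpu;
        return 0;
}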


Well, isn't userspace's idea of the topology just what the kernel tells
it? I'm not sure what it would buy you... but I guess it wouldn't be
much harder than doing it in-kernel; it's just a matter of defining the
userspace API.

BTW, I guess you haven't seen my sched domains code. It can describe
arbitrary topologies.
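
The rough shape of the idea (heavily simplified; the real struct has
more fields and a proper cpumask type): each domain spans a set of
CPUs and points at its parent, the next-wider level, so any tree-shaped
topology can be described: siblings, a node's CPUs, a whole NUMA
machine, and so on.

typedef unsigned long cpumask_t;        /* toy mask: one bit per CPU */

struct sched_domain {
        struct sched_domain *parent;    /* next-wider domain, NULL at top */
        cpumask_t span;                 /* CPUs this domain covers */
        unsigned int balance_interval;  /* ms between balance attempts */
};

/* Walk up from a CPU's lowest domain until one also covers 'dst'. */
static struct sched_domain *domain_covering(struct sched_domain *sd,
                                            unsigned int dst)
{
        while (sd && !(sd->span & (1UL << dst)))
                sd = sd->parent;
        return sd;              /* NULL if nothing spans both CPUs */
}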

