Re: [RFC PATCH] sched: Improve scalability of select_idle_sibling using SMT balance

From: Subhra Mazumdar
Date: Wed Dec 20 2017 - 17:28:21 EST

On 12/19/2017 11:36 AM, Peter Zijlstra wrote:
> On Fri, Dec 08, 2017 at 12:07:54PM -0800, subhra mazumdar wrote:
> > +static inline void
> > +sd_context_switch(struct sched_domain *sd, struct rq *rq, int util)
> > +{
> > +        struct sched_group *sg_cpu;
> > +
> > +        /* atomically add/subtract the util */
> > +        sg_cpu = sd->sg_cpu;
> > +        if (util > 0)
> > +                atomic_inc(
> > +                        (atomic_t *)(&(sg_cpu->utilization)));
> > +        else
> > +                atomic_dec(
> > +                        (atomic_t *)(&(sg_cpu->utilization)));
> Whahah, lol, no!

This patch addresses the scalability problem of the current logic, which
can end up searching all cores or CPUs of the LLC domain when scheduling
a thread. It does so with a randomized approach: a better core to
schedule on can be chosen by comparing the SMT utilization (i.e. how
many CPUs of a core are busy) of two cores in the LLC domain. The
comparison is made between the core of the target cpu passed to
select_idle_sibling() and a randomly chosen core. With accurate
accounting this comparison is O(1). The SMT utilization is accounted at
context switch time, which is why the atomic operation is needed for
correctness.
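
To make the selection step concrete, here is a minimal userspace-style
sketch of that comparison (names like core_util[] and
smt_balance_choose() are made up for illustration; this is not the
actual kernel code):

#include <stdlib.h>

#define NR_CORES 16

/* Per-core count of busy SMT siblings, maintained at context switch. */
static int core_util[NR_CORES];

/*
 * Compare the core of the target cpu with one randomly chosen core of
 * the same LLC and return the one with fewer busy SMT siblings.  Two
 * loads and one compare, i.e. O(1) regardless of the size of the LLC.
 */
static int smt_balance_choose(int target_core)
{
        int random_core = rand() % NR_CORES;

        if (core_util[random_core] < core_util[target_core])
                return random_core;
        return target_core;
}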

The cost of the atomic increment/decrement is minimal because it is
incurred only for levels below the LLC, i.e. only for cores. My data for
context_switch() time (which I had sent with the patch) shows that the
extra cost is minimal and mostly within the noise margin.
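
For reference, the bookkeeping amounts to at most one atomic add or
subtract per level below the LLC on each context switch. A standalone
C11 sketch of the idea (illustrative names only, not the kernel's
atomic_t machinery):

#include <stdatomic.h>

/* Busy SMT siblings of one core, updated only at context switch. */
static atomic_int core_busy_cpus;

/* A CPU of this core switches from the idle task to a real task. */
static void core_cpu_goes_busy(void)
{
        atomic_fetch_add(&core_busy_cpus, 1);
}

/* A CPU of this core switches from a real task to the idle task. */
static void core_cpu_goes_idle(void)
{
        atomic_fetch_sub(&core_busy_cpus, 1);
}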

I am open to any suggestions on a better way of doing it.

> > +}
> > +
> >  /*
> >   * context_switch - switch to the new MM and the new thread's register state.
> >   */
> > @@ -2751,6 +2766,51 @@ context_switch(struct rq *rq, struct task_struct *prev,
> >                 struct task_struct *next, struct rq_flags *rf)
> >  {
> >          struct mm_struct *mm, *oldmm;
> > +        int this_cpu = rq->cpu;
> > +        struct sched_domain *sd;
> > +        unsigned int cond;
> > +
> > +        cond = ((prev != rq->idle) << 1) | (next != rq->idle);
> > +        sd = rcu_dereference(per_cpu(sd_llc, this_cpu));
> That one is RCU, not RCU-sched protected..

Ok

> > +        /*
> > +         * From sd_llc downward update the SMT utilization.
> > +         * Skip the lowest level 0.
> > +         */
> > +        for_each_lower_domain(sd) {
> > +                if (sd->level == 0)
> > +                        break;
> > +                if (rq->initial_util == UTIL_UNINITIALIZED) {
> > +                        switch (cond) {
> > +                        case PREV_IDLE_NEXT_NIDLE:
> > +                        case PREV_NIDLE_NEXT_NIDLE:
> > +                                sd_context_switch(sd, rq, SMT_THREAD_UTIL);
> > +                                break;
> > +                        case PREV_NIDLE_NEXT_IDLE:
> > +                        case PREV_IDLE_NEXT_IDLE:
> > +                                break;
> > +                        }
> > +                } else {
> > +                        switch (cond) {
> > +                        case PREV_IDLE_NEXT_NIDLE:
> > +                                sd_context_switch(sd, rq, SMT_THREAD_UTIL);
> > +                                break;
> > +                        case PREV_NIDLE_NEXT_IDLE:
> > +                                sd_context_switch(sd, rq, -SMT_THREAD_UTIL);
> > +                                break;
> > +                        case PREV_IDLE_NEXT_IDLE:
> > +                        case PREV_NIDLE_NEXT_NIDLE:
> > +                                break;
> > +                        }
> > +                }
> > +        }
> > +
> > +        if (sd) {
> > +                if (next == rq->idle)
> > +                        rq->initial_util = UTIL_IDLE;
> > +                else
> > +                        rq->initial_util = UTIL_BUSY;
> > +        }
> WTH do you even think this is reasonable?

I will clean this part up.

> >          prepare_task_switch(rq, prev, next);
> And I still have no idea what the patch does, but I can't be bothered to
> reverse engineer it just now.

I will improve the changelog to explain the logic better in v2.

Thanks,
Subhra