Re: [PATCHv4 2/2] powerpc: implement arch_scale_smt_power for Power7

From: Joel Schopp
Date: Thu Feb 18 2010 - 11:28:35 EST


Sorry for the slow reply, I was on vacation. Mikey seems to have answered pretty well, though.

That is, unless these threads 2 and 3 really are _that_ weak, at which
point one wonders why IBM bothered with the silicon ;-)
Peter,

2 & 3 aren't weaker than 0 & 1 but....

The core has dynamic SMT mode switching which is controlled by the
hypervisor (IBM's PHYP). There are 3 SMT modes:
SMT1 uses thread 0
SMT2 uses threads 0 & 1
SMT4 uses threads 0, 1, 2 & 3
When in any particular SMT mode, all threads have the same performance
as each other (i.e. at any moment in time, all threads perform the same).

The SMT mode switching works such that when Linux has threads 2 & 3 idle
and 0 & 1 active, it will cede (via the H_CEDE hypercall) threads 2 and 3
in the idle loop, and the hypervisor will automatically switch to SMT2 for
that core (independent of other cores). The opposite is not true: if
threads 0 & 1 are idle and 2 & 3 are active, we stay in SMT4 mode.

Similarly if thread 0 is active and threads 1, 2 & 3 are idle, we'll go
into SMT1 mode.

If we can get the core into a lower SMT mode (SMT1 is best), the threads
will perform better, since they share fewer core resources. Hence when
we have idle threads, we want them to be the higher-numbered ones.

Just out of curiosity, is this a hardware constraint or a hypervisor
constraint?
Hardware.
So to answer your question, threads 2 and 3 aren't weaker than the other
threads when in SMT4 mode. It's that if we idle threads 2 & 3, threads
0 & 1 will speed up since we'll move to SMT2 mode.

I'm pretty vague on Linux scheduler details, so I'm a bit at sea as to
how to solve this. Can you suggest any mechanisms we currently have in
the kernel to reflect these properties, or do you think we need to
develop something new? If so, any pointers as to where we should look?

Since the threads speed up, we'd need to change their weights at runtime regardless of placement. It just seems to make sense to let the changed weights affect placement naturally at the same time.

Well, there currently isn't one, and I've been telling people to create a
new SD flag to reflect this and influence the find_busiest_group()
(f_b_g()) behaviour.

Something like the below perhaps, totally untested and without comments
so that you'll have to reverse engineer and validate my thinking.

There's one fundamental assumption, and one weakness in the
implementation.
I'm going to guess the weakness is that it doesn't adjust the cpu power, so tasks running in SMT1 mode actually get more than they account for. What's the assumption?