Re: [RFC][PATCH] Remove cgroup member from struct page

From: KAMEZAWA Hiroyuki
Date: Mon Sep 01 2008 - 01:20:34 EST


On Mon, 1 Sep 2008 13:03:51 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > That depends; if we can get the lockless page cgroup done quickly, I don't mind
> > waiting, but if it is going to take longer, I would rather push these changes
> > in.
> The development of lockless-page_cgroup is not stalled. I'm just waiting for
> my 8-CPU box to come back from maintenance...
> If you want to see it, I'll post v3 with brief results on a small (2-CPU) box.
>
This is the current status (UnixBench results) on a 2-core/1-socket x86-64 system:
[disabled] is with memory cgroup disabled, [rc4mm1] is the baseline -rc4-mm1 kernel,
and [lockless] is with the lockless page_cgroup patch applied.

==
[disabled]
Execl Throughput                    3103.3 lps (29.7 secs, 3 samples)
C Compiler Throughput               1052.0 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent)        5915.0 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent)        1142.7 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent)        586.0 lpm (60.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places  131463.3 lpm (30.0 secs, 3 samples)

[rc4mm1]
Execl Throughput                    3004.4 lps (29.6 secs, 3 samples)
C Compiler Throughput               1017.9 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent)        5726.3 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent)        1124.3 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent)        576.0 lpm (60.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places  125446.5 lpm (30.0 secs, 3 samples)

[lockless]
Execl Throughput                    3041.0 lps (29.8 secs, 3 samples)
C Compiler Throughput               1025.7 lpm (60.0 secs, 3 samples)
Shell Scripts (1 concurrent)        5713.6 lpm (60.0 secs, 3 samples)
Shell Scripts (8 concurrent)        1113.7 lpm (60.0 secs, 3 samples)
Shell Scripts (16 concurrent)        571.3 lpm (60.0 secs, 3 samples)
Dc: sqrt(2) to 99 decimal places  125417.9 lpm (30.0 secs, 3 samples)
==

From this, the single-thread results are good, but the multi-process results
are not ;)
So I think the number of atomic ops is reduced, but there is still a contention
or cache-bouncing problem that should be fixed. I'd like to fix it and re-check
on the 8-core system when it is back.
These days I wonder whether within-3%-overhead is a realistic goal.
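
For reference, the kind of cache-line bouncing I mean can be demonstrated with
a small userspace test like the sketch below (this is not the page_cgroup code;
the thread count and the 64-byte line size are assumptions for illustration).
One shared atomic counter forces its cache line to ping-pong between cores,
while per-thread counters padded onto separate cache lines stay core-local.

/* bounce.c -- build with: gcc -O2 -pthread bounce.c -o bounce */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS  4                     /* assumed; match your core count */
#define ITERS     (1L << 24)
#define CACHELINE 64                    /* assumed cache-line size */

static atomic_long shared_counter;      /* one hot, shared cache line */

struct padded {                         /* one counter per cache line */
	long v;
	char pad[CACHELINE - sizeof(long)];
};
static struct padded local_counter[NTHREADS];

static void *shared_worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERS; i++)
		atomic_fetch_add(&shared_counter, 1);  /* line bounces */
	return NULL;
}

static void *local_worker(void *arg)
{
	struct padded *c = arg;
	for (long i = 0; i < ITERS; i++)
		c->v++;                                /* stays core-local */
	return NULL;
}

static double run(void *(*fn)(void *), int use_local)
{
	pthread_t t[NTHREADS];
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, fn,
			       use_local ? (void *)&local_counter[i] : NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &b);
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	printf("shared atomic counter: %.2f secs\n", run(shared_worker, 0));
	printf("per-thread counters:   %.2f secs\n", run(local_worker, 1));
	return 0;
}

On a multi-core box the shared-counter run is typically several times slower;
that is the same effect that shows up as the multi-process regression above.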

Thanks,
-Kame



