[LKP] [mm] 4d942466994: +4.8% will-it-scale.per_process_ops

From: Huang Ying
Date: Wed Feb 25 2015 - 22:27:31 EST


FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 4d9424669946532be754a6e116618dcb58430cb4 ("mm: convert p[te|md]_mknonnuma and remaining page table manipulations")


testbox/testcase/testparams: lkp-g5/will-it-scale/performance-readseek3

842915f56667f9ee  4d9424669946532be754a6e116
----------------  --------------------------
  %stddev      %change      %stddev
      \            |            \
1526912 ± 1%  +4.8%  1599490 ± 0%  will-it-scale.per_process_ops
15364 ± 4%  +10.1%  16920 ± 5%  will-it-scale.time.involuntary_context_switches
131290 ± 0%  +4.8%  137653 ± 0%  will-it-scale.time.minor_page_faults
318 ± 1%  -1.6%  313 ± 0%  will-it-scale.time.elapsed_time.max
318 ± 1%  -1.6%  313 ± 0%  will-it-scale.time.elapsed_time
54 ± 25%  -83.8%  8 ± 48%  sched_debug.cfs_rq[53]:/.tg_load_contrib
11 ± 39%  +316.9%  47 ± 22%  sched_debug.cfs_rq[18]:/.tg_load_contrib
8 ± 24%  -79.0%  1 ± 47%  sched_debug.cfs_rq[11]:/.nr_spread_over
46 ± 7%  +271.8%  172 ± 45%  numa-vmstat.node2.nr_inactive_anon
186 ± 7%  +270.8%  691 ± 44%  numa-meminfo.node2.Inactive(anon)
642 ± 15%  +116.3%  1388 ± 41%  sched_debug.cpu#2.ttwu_count
37 ± 17%  +170.3%  100 ± 15%  numa-vmstat.node6.nr_page_table_pages
19 ± 41%  +98.7%  37 ± 38%  sched_debug.cfs_rq[62]:/.blocked_load_avg
347 ± 19%  +103.9%  708 ± 49%  sched_debug.cpu#6.sched_goidle
215 ± 5%  +148.8%  535 ± 31%  sched_debug.cpu#101.sched_goidle
519 ± 7%  +135.5%  1222 ± 30%  sched_debug.cpu#101.nr_switches
23 ± 35%  +84.3%  43 ± 33%  sched_debug.cfs_rq[62]:/.tg_load_contrib
833 ± 18%  +89.1%  1576 ± 47%  sched_debug.cpu#6.nr_switches
1083043 ± 0%  -55.2%  485505 ± 1%  proc-vmstat.numa_pte_updates
250 ± 28%  +120.1%  551 ± 41%  sched_debug.cpu#74.ttwu_local
865 ± 18%  +61.1%  1394 ± 29%  sched_debug.cpu#32.ttwu_local
515 ± 18%  +81.0%  932 ± 41%  sched_debug.cpu#2.sched_goidle
1242 ± 22%  +76.4%  2191 ± 39%  sched_debug.cpu#2.nr_switches
23 ± 17%  +63.9%  38 ± 19%  sched_debug.cfs_rq[96]:/.tg_load_contrib
302 ± 0%  +88.9%  570 ± 31%  sched_debug.cpu#47.ttwu_local
3126 ± 6%  +77.9%  5562 ± 36%  sched_debug.cpu#15.sched_count
1717 ± 30%  +30.5%  2240 ± 21%  sched_debug.cpu#32.ttwu_count
256 ± 14%  +87.1%  479 ± 22%  sched_debug.cpu#7.ttwu_count
4200 ± 30%  -55.3%  1878 ± 30%  sched_debug.cpu#67.ttwu_count
1036 ± 10%  -50.7%  511 ± 35%  sched_debug.cpu#79.ttwu_count
527 ± 18%  +72.5%  909 ± 34%  sched_debug.cpu#15.ttwu_local
359 ± 7%  +64.1%  589 ± 17%  sched_debug.cpu#50.ttwu_count
1692 ± 16%  +34.2%  2271 ± 21%  sched_debug.cpu#32.sched_goidle
653 ± 14%  -51.7%  315 ± 37%  sched_debug.cpu#79.ttwu_local
137 ± 3%  +99.1%  272 ± 38%  sched_debug.cpu#40.ttwu_local
6398 ± 24%  +55.9%  9973 ± 16%  numa-meminfo.node3.SReclaimable
1599 ± 24%  +55.9%  2493 ± 16%  numa-vmstat.node3.nr_slab_reclaimable
3669 ± 18%  +33.6%  4900 ± 16%  sched_debug.cpu#32.nr_switches
3681 ± 18%  +33.4%  4910 ± 16%  sched_debug.cpu#32.sched_count
546 ± 9%  +73.0%  945 ± 24%  sched_debug.cpu#40.nr_switches
965 ± 26%  -38.8%  590 ± 4%  sched_debug.cpu#64.sched_goidle
569 ± 8%  +69.2%  963 ± 23%  sched_debug.cpu#40.sched_count
225 ± 7%  +79.0%  402 ± 25%  sched_debug.cpu#40.sched_goidle
199 ± 4%  +53.6%  306 ± 17%  sched_debug.cpu#103.sched_goidle
112 ± 3%  +44.9%  162 ± 20%  sched_debug.cpu#102.ttwu_local
198 ± 15%  +71.8%  340 ± 39%  sched_debug.cpu#40.ttwu_count
475 ± 2%  +51.8%  721 ± 17%  sched_debug.cpu#103.nr_switches
162 ± 9%  +86.6%  302 ± 25%  sched_debug.cpu#7.ttwu_local
203 ± 1%  +54.7%  314 ± 15%  sched_debug.cpu#102.sched_goidle
499 ± 7%  +68.9%  843 ± 35%  sched_debug.cpu#47.ttwu_count
478 ± 2%  +54.4%  739 ± 16%  sched_debug.cpu#102.nr_switches
767 ± 16%  +66.6%  1278 ± 25%  sched_debug.cpu#7.nr_switches
4369 ± 15%  -41.3%  2564 ± 4%  sched_debug.cpu#65.ttwu_count
169 ± 14%  +32.2%  224 ± 21%  sched_debug.cpu#105.ttwu_local
559 ± 6%  +40.3%  784 ± 17%  sched_debug.cpu#47.sched_goidle
770 ± 7%  -28.2%  552 ± 21%  sched_debug.cpu#126.ttwu_local
1195 ± 21%  +35.0%  1614 ± 16%  cpuidle.C1E-NHM.usage
1450 ± 17%  +38.8%  2013 ± 8%  sched_debug.cpu#90.ttwu_local
929 ± 7%  +41.3%  1312 ± 9%  sched_debug.cpu#97.ttwu_count
506 ± 6%  +27.6%  646 ± 12%  sched_debug.cpu#48.sched_goidle
914 ± 8%  +54.0%  1407 ± 26%  sched_debug.cpu#50.nr_switches
401 ± 6%  +52.9%  613 ± 25%  sched_debug.cpu#50.sched_goidle
1004 ± 9%  +18.8%  1193 ± 12%  slabinfo.xfs_buf.num_objs
1004 ± 9%  +18.8%  1193 ± 12%  slabinfo.xfs_buf.active_objs
1891 ± 3%  +26.8%  2397 ± 9%  sched_debug.cpu#120.nr_switches
1380 ± 6%  -26.6%  1012 ± 10%  sched_debug.cpu#126.ttwu_count
15695 ± 13%  -21.4%  12338 ± 6%  sched_debug.cfs_rq[113]:/.exec_clock
1310 ± 5%  +40.7%  1844 ± 19%  sched_debug.cpu#47.nr_switches
1330 ± 5%  +40.0%  1863 ± 19%  sched_debug.cpu#47.sched_count
1286 ± 8%  +44.7%  1861 ± 35%  sched_debug.cpu#15.ttwu_count
21 ± 10%  +17.7%  25 ± 12%  sched_debug.cpu#96.cpu_load[3]
23482 ± 12%  +29.4%  30397 ± 4%  numa-meminfo.node3.Slab
566 ± 21%  -30.7%  392 ± 8%  sched_debug.cpu#115.sched_goidle
22 ± 11%  +17.5%  26 ± 13%  sched_debug.cpu#96.cpu_load[2]
9 ± 5%  +20.5%  11 ± 11%  sched_debug.cpu#8.cpu_load[1]
9 ± 5%  +20.5%  11 ± 11%  sched_debug.cfs_rq[8]:/.load
9 ± 5%  +20.5%  11 ± 11%  sched_debug.cpu#5.load
9 ± 5%  +20.5%  11 ± 11%  sched_debug.cpu#8.load
3581 ± 5%  +22.1%  4372 ± 6%  sched_debug.cpu#26.curr->pid
215 ± 6%  +37.9%  297 ± 14%  sched_debug.cpu#50.ttwu_local
1331 ± 20%  -31.6%  911 ± 9%  sched_debug.cpu#115.nr_switches
824 ± 3%  +21.2%  999 ± 14%  sched_debug.cpu#120.sched_goidle
1339 ± 20%  -31.3%  921 ± 9%  sched_debug.cpu#115.sched_count
21 ± 10%  +17.9%  24 ± 12%  sched_debug.cpu#96.cpu_load[4]
42610 ± 11%  -19.9%  34115 ± 4%  sched_debug.cfs_rq[17]:/.exec_clock
2693 ± 7%  +27.2%  3425 ± 10%  sched_debug.cpu#95.curr->pid
2899 ± 18%  +26.8%  3677 ± 11%  sched_debug.cpu#90.ttwu_count
9 ± 17%  -35.3%  6 ± 13%  sched_debug.cpu#20.load
8 ± 5%  -19.2%  7 ± 10%  sched_debug.cfs_rq[45]:/.load
9 ± 17%  -35.3%  6 ± 13%  sched_debug.cfs_rq[20]:/.load
8 ± 5%  -19.2%  7 ± 10%  sched_debug.cpu#45.load
5131 ± 11%  -20.9%  4061 ± 7%  sched_debug.cpu#59.curr->pid
999035 ± 0%  -21.4%  785703 ± 1%  sched_debug.cpu#103.avg_idle
1000 ± 14%  -33.2%  668 ± 32%  numa-meminfo.node7.PageTables
1146 ± 11%  -20.9%  907 ± 11%  sched_debug.cpu#117.nr_switches
1157 ± 11%  -20.9%  916 ± 10%  sched_debug.cpu#117.sched_count
102779 ± 4%  +24.6%  128062 ± 8%  numa-numastat.node0.numa_hit
506 ± 11%  -20.4%  403 ± 11%  sched_debug.cpu#117.sched_goidle
1819 ± 10%  +12.1%  2039 ± 9%  sched_debug.cpu#94.ttwu_local
222 ± 21%  +35.5%  300 ± 30%  sched_debug.cpu#49.ttwu_local
1000000 ± 0%  -20.2%  797734 ± 2%  sched_debug.cpu#102.avg_idle
464 ± 5%  +22.0%  567 ± 19%  sched_debug.cpu#123.ttwu_local
892 ± 17%  -24.9%  669 ± 14%  sched_debug.cpu#59.sched_goidle
863 ± 2%  +16.5%  1005 ± 6%  sched_debug.cpu#125.ttwu_count
855 ± 6%  +20.6%  1031 ± 15%  sched_debug.cpu#124.ttwu_count
790 ± 8%  +20.5%  952 ± 6%  slabinfo.blkdev_ioc.active_objs
790 ± 8%  +20.5%  952 ± 6%  slabinfo.blkdev_ioc.num_objs
1000000 ± 0%  -20.5%  795037 ± 1%  sched_debug.cpu#101.avg_idle
521 ± 25%  -32.9%  349 ± 6%  sched_debug.cpu#116.ttwu_count
1000000 ± 0%  -18.9%  811355 ± 2%  sched_debug.cpu#37.avg_idle
99927 ± 5%  +25.0%  124916 ± 7%  numa-numastat.node0.local_node
998080 ± 1%  -18.7%  811378 ± 3%  sched_debug.cpu#40.avg_idle
9 ± 5%  +15.2%  10 ± 7%  sched_debug.cpu#5.cpu_load[0]
9 ± 5%  +15.2%  10 ± 7%  sched_debug.cpu#5.cpu_load[3]
998768 ± 0%  -18.6%  813469 ± 2%  sched_debug.cpu#39.avg_idle
513 ± 5%  -25.7%  381 ± 16%  sched_debug.cpu#115.ttwu_count
4271 ± 8%  +19.5%  5105 ± 8%  numa-vmstat.node3.nr_slab_unreclaimable
17083 ± 8%  +19.6%  20423 ± 8%  numa-meminfo.node3.SUnreclaim
991551 ± 1%  -17.3%  819931 ± 3%  sched_debug.cpu#38.avg_idle
4521 ± 7%  -18.5%  3684 ± 10%  sched_debug.cpu#106.curr->pid
998706 ± 0%  -17.7%  821652 ± 3%  sched_debug.cpu#36.avg_idle
996941 ± 0%  -16.8%  829747 ± 2%  sched_debug.cpu#33.avg_idle
339 ± 10%  -22.4%  263 ± 13%  sched_debug.cpu#115.ttwu_local
999527 ± 0%  -18.7%  812806 ± 4%  sched_debug.cpu#99.avg_idle
996295 ± 0%  -16.5%  831915 ± 2%  sched_debug.cpu#35.avg_idle
997889 ± 0%  -19.6%  802292 ± 3%  sched_debug.cpu#100.avg_idle
3462 ± 2%  -20.4%  2757 ± 11%  sched_debug.cpu#127.curr->pid
3207 ± 10%  +20.2%  3855 ± 4%  sched_debug.cpu#74.curr->pid
470490 ± 4%  -10.6%  420446 ± 5%  numa-numastat.node7.local_node
21800 ± 49%  -40.6%  12939 ± 3%  numa-meminfo.node4.Active
474714 ± 4%  -10.6%  424629 ± 5%  numa-numastat.node7.numa_hit
998931 ± 0%  -16.7%  832405 ± 2%  sched_debug.cpu#34.avg_idle
1000000 ± 0%  -15.2%  848462 ± 0%  sched_debug.cpu#96.avg_idle
177 ± 9%  +19.4%  211 ± 5%  sched_debug.cfs_rq[95]:/.tg_runnable_contrib
1018480 ± 2%  -15.3%  862826 ± 1%  sched_debug.cpu#41.avg_idle
10768 ± 5%  -20.5%  8559 ± 8%  sched_debug.cfs_rq[127]:/.avg->runnable_avg_sum
234 ± 6%  -20.7%  185 ± 8%  sched_debug.cfs_rq[127]:/.tg_runnable_contrib
8154 ± 9%  +18.9%  9698 ± 5%  sched_debug.cfs_rq[95]:/.avg->runnable_avg_sum
999017 ± 0%  -13.5%  864522 ± 1%  sched_debug.cpu#45.avg_idle
999665 ± 0%  -14.0%  859582 ± 1%  sched_debug.cpu#44.avg_idle
996108 ± 0%  -15.6%  840514 ± 4%  sched_debug.cpu#98.avg_idle
994758 ± 0%  -15.1%  844877 ± 3%  sched_debug.cpu#48.avg_idle
467 ± 3%  -15.1%  397 ± 9%  sched_debug.cpu#116.sched_goidle
997346 ± 0%  -13.7%  861125 ± 1%  sched_debug.cpu#42.avg_idle
1000000 ± 0%  -14.2%  857602 ± 2%  sched_debug.cpu#47.avg_idle
1000000 ± 0%  -13.8%  862083 ± 1%  sched_debug.cpu#46.avg_idle
1000000 ± 0%  -13.5%  864694 ± 1%  sched_debug.cpu#43.avg_idle
2923 ± 0%  +17.9%  3447 ± 4%  sched_debug.cpu#82.curr->pid
839 ± 0%  +19.0%  998 ± 17%  sched_debug.cpu#124.sched_goidle
169 ± 5%  +10.2%  186 ± 7%  sched_debug.cfs_rq[90]:/.tg_runnable_contrib
3870 ± 14%  +17.3%  4538 ± 5%  sched_debug.cpu#27.curr->pid
7791 ± 5%  +10.0%  8568 ± 7%  sched_debug.cfs_rq[90]:/.avg->runnable_avg_sum
232 ± 5%  +14.1%  265 ± 4%  sched_debug.cfs_rq[74]:/.tg_runnable_contrib
2530 ± 3%  +13.7%  2877 ± 6%  numa-vmstat.node5.nr_active_file
10124 ± 3%  +13.7%  11510 ± 6%  numa-meminfo.node5.Active(file)
3971 ± 5%  -17.6%  3271 ± 16%  sched_debug.cpu#114.curr->pid
10699 ± 5%  +14.2%  12218 ± 4%  sched_debug.cfs_rq[74]:/.avg->runnable_avg_sum
998715 ± 0%  -12.4%  874974 ± 1%  sched_debug.cpu#53.avg_idle
998753 ± 0%  -12.5%  873481 ± 0%  sched_debug.cpu#55.avg_idle
995163 ± 0%  -11.8%  878062 ± 1%  sched_debug.cpu#50.avg_idle
997692 ± 0%  -12.6%  872339 ± 1%  sched_debug.cpu#56.avg_idle
1072 ± 4%  -15.6%  904 ± 10%  sched_debug.cpu#116.nr_switches
996316 ± 0%  -12.0%  876849 ± 1%  sched_debug.cpu#52.avg_idle
997489 ± 0%  -11.9%  878444 ± 1%  sched_debug.cpu#54.avg_idle
9 ± 5%  -14.3%  8 ± 0%  sched_debug.cpu#9.cpu_load[3]
9 ± 5%  -14.3%  8 ± 0%  sched_debug.cpu#9.cpu_load[2]
996313 ± 0%  -12.1%  875903 ± 2%  sched_debug.cpu#49.avg_idle
15364 ± 4%  +10.1%  16920 ± 5%  time.involuntary_context_switches
4429 ± 7%  -16.8%  3685 ± 10%  sched_debug.cpu#107.curr->pid
997203 ± 0%  -11.2%  885706 ± 1%  sched_debug.cpu#64.avg_idle
737 ± 3%  +16.7%  860 ± 5%  time.voluntary_context_switches
283 ± 9%  +15.9%  328 ± 3%  sched_debug.cfs_rq[26]:/.tg_runnable_contrib
2554 ± 1%  +7.6%  2748 ± 5%  numa-vmstat.node3.nr_active_file
10220 ± 1%  +7.6%  10994 ± 5%  numa-meminfo.node3.Active(file)
13013 ± 9%  +15.9%  15078 ± 3%  sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
3896 ± 6%  +6.6%  4155 ± 4%  sched_debug.cpu#29.curr->pid
999677 ± 0%  -10.9%  891066 ± 0%  sched_debug.cpu#57.avg_idle
2054 ± 2%  +11.6%  2292 ± 7%  sched_debug.cpu#125.nr_switches
160853 ± 4%  +14.8%  184682 ± 6%  numa-numastat.node1.local_node
996830 ± 0%  -11.8%  878821 ± 2%  sched_debug.cpu#97.avg_idle
999553 ± 0%  -10.4%  895133 ± 0%  sched_debug.cpu#61.avg_idle
999693 ± 0%  -10.9%  890366 ± 1%  sched_debug.cpu#58.avg_idle
4686 ± 6%  -12.8%  4085 ± 6%  sched_debug.cpu#58.curr->pid
997785 ± 0%  -10.9%  889088 ± 1%  sched_debug.cpu#60.avg_idle
999123 ± 0%  -10.5%  893883 ± 0%  sched_debug.cpu#62.avg_idle
989119 ± 0%  -10.7%  883255 ± 1%  sched_debug.cpu#51.avg_idle
12980 ± 8%  +7.5%  13948 ± 4%  sched_debug.cfs_rq[29]:/.avg->runnable_avg_sum
165070 ± 4%  +14.4%  188856 ± 6%  numa-numastat.node1.numa_hit
999487 ± 0%  -10.5%  894943 ± 0%  sched_debug.cpu#63.avg_idle
10 ± 0%  -15.0%  8 ± 5%  sched_debug.cpu#9.cpu_load[1]
270 ± 8%  -13.4%  234 ± 7%  sched_debug.cpu#117.ttwu_local
999055 ± 0%  -9.4%  904983 ± 0%  sched_debug.cpu#109.avg_idle
1000000 ± 0%  -9.2%  907540 ± 0%  sched_debug.cpu#104.avg_idle
1000000 ± 0%  -9.8%  902218 ± 0%  sched_debug.cpu#111.avg_idle

lkp-g5: Westmere-EX
Memory: 2048G

To reproduce:

apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
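
For reference, per_process_ops above is roughly the per-process completion
rate of the readseek micro-benchmark loop, which repeatedly seeks back and
reads a small buffer from a file in each test process. A minimal
stand-alone sketch of such a loop (illustrative only; the file name,
buffer size, and run time are assumptions, not the actual will-it-scale
source):

	/* readseek-style loop: seek to offset 0, read one buffer, repeat.
	 * Illustrative sketch, not the actual will-it-scale testcase. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		char path[] = "/tmp/readseek.XXXXXX";	/* hypothetical temp file */
		unsigned long ops = 0;
		time_t end;
		int fd = mkstemp(path);

		if (fd < 0) {
			perror("mkstemp");
			return 1;
		}
		unlink(path);		/* file persists until fd is closed */
		memset(buf, 0, sizeof(buf));
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
			perror("write");
			return 1;
		}

		end = time(NULL) + 5;	/* run for ~5 seconds */
		while (time(NULL) < end) {
			if (lseek(fd, 0, SEEK_SET) < 0 ||
			    read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
				break;
			ops++;		/* one seek+read counts as one op */
		}
		printf("%lu ops in ~5s\n", ops);
		return 0;
	}

The real harness runs one such loop per process (or thread) and sums the
per-process counters, which is what the will-it-scale.per_process_ops
metric reports.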


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

---
testcase: will-it-scale
default-monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 10
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor: performance
commit: 3040b24561684da6d1722389ee3a63ce817e471a
model: G5
nr_cpu: 128
memory: 2048G
rootfs_partition:
perf-profile:
  freq: 800
will-it-scale:
  test: readseek3
testbox: lkp-g5
tbox_group: lkp-g5
kconfig: x86_64-rhel
enqueue_time: 2015-02-18 19:46:15.349624490 +08:00
head_commit: 3040b24561684da6d1722389ee3a63ce817e471a
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: next/master
kernel: "/kernel/x86_64-rhel/3040b24561684da6d1722389ee3a63ce817e471a/vmlinuz-3.19.0-next-20150219"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/lkp-g5/will-it-scale/performance-readseek3/debian-x86_64-2015-02-07.cgz/x86_64-rhel/3040b24561684da6d1722389ee3a63ce817e471a/0"
job_file: "/lkp/scheduled/lkp-g5/cyclic_will-it-scale-performance-readseek3-x86_64-rhel-HEAD-3040b24561684da6d1722389ee3a63ce817e471a-0-20150218-23600-1g6f24d.yaml"
dequeue_time: 2015-02-20 06:40:51.497470720 +08:00
job_state: finished
loadavg: 61.73 35.35 15.25 1/1022 17836
start_time: '1424386077'
end_time: '1424386392'
version: "/lkp/lkp/.src-20150219-182016"
# set the performance governor on every CPU (cpu0 .. cpu127)
for c in /sys/devices/system/cpu/cpu[0-9]*
do
	echo performance > "$c/cpufreq/scaling_governor"
done
./runtest.py readseek3 8 both 1 8 16 24 32 40 48 56 64 96 128