Re: [lkp-robot] [fs] 5c6de586e8: vm-scalability.throughput +12.4% improvement (from reorganizing struct inode?)

From: Amir Goldstein
Date: Thu Jul 05 2018 - 02:01:12 EST


On Mon, Jul 2, 2018 at 9:27 AM, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> Linus,
>
> This may be a test fluctuation, or the result of moving
> i_blkbits closer to i_bytes and i_lock.
>
> In any case, ping for:
> https://marc.info/?l=linux-fsdevel&m=152882624707975&w=2
>

Linus,

Per your request, I will re-post the original patch with a link to this
discussion (which has now been made public).

Thanks,
Amir.
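
P.S. For readers who did not follow the original thread: the patch changes
no logic; it only reorders struct inode members so that small fields pack
into existing alignment holes, shaving one 8-byte word off every cached
inode. A minimal, self-contained sketch of the general technique (made-up
field names, not the actual struct inode layout from the patch):

	#include <stdio.h>

	struct before {				/* sizeof == 32 on x86_64 */
		unsigned int	blkbits;	/* 4 bytes + 4 bytes padding */
		void		*op;		/* pointer, 8-byte aligned */
		unsigned short	bytes;		/* 2 bytes + 6 bytes padding */
		void		*mapping;
	};

	struct after {				/* sizeof == 24 on x86_64 */
		void		*op;
		void		*mapping;
		unsigned int	blkbits;	/* the two small fields now */
		unsigned short	bytes;		/* share a single word */
	};

	int main(void)
	{
		printf("before=%zu after=%zu\n",
		       sizeof(struct before), sizeof(struct after));
		return 0;
	}

pahole -C inode vmlinux shows such holes directly. Packing related fields
(e.g. i_bytes next to the i_lock that protects it) may also keep hot fields
on the same cache line, which is one plausible explanation for the
throughput delta reported below.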

>
> ---------- Forwarded message ----------
> From: kernel test robot <xiaolong.ye@xxxxxxxxx>
> Date: Mon, Jul 2, 2018 at 8:14 AM
> Subject: [lkp-robot] [fs] 5c6de586e8: vm-scalability.throughput +12.4% improvement
> To: Amir Goldstein <amir73il@xxxxxxxxx>
> Cc: lkp@xxxxxx
>
>
>
> Greetings,
>
> FYI, we noticed a +12.4% improvement of vm-scalability.throughput due to commit:
>
>
> commit: 5c6de586e899a4a80a0ffa26468639f43dee1009 ("[PATCH] fs: shave 8 bytes off of struct inode")
> url: https://github.com/0day-ci/linux/commits/Amir-Goldstein/fs-shave-8-bytes-off-of-struct-inode/20180612-192311
>
>
> in testcase: vm-scalability
> on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz
> with 256G memory
> with following parameters:
>
> runtime: 300s
> test: small-allocs
> cpufreq_governor: performance
>
> test-description: The motivation behind this suite is to exercise
> functions and regions of the mm/ of the Linux kernel which are of
> interest to us.
> test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
>
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> To reproduce:
>
> git clone https://github.com/intel/lkp-tests.git
> cd lkp-tests
> bin/lkp install job.yaml # job file is attached in this email
> bin/lkp run job.yaml
>
> =========================================================================================
> compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
> gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep5/small-allocs/vm-scalability
>
> commit:
> 8efcf34a26 ("ARM: SoC: late updates")
> 5c6de586e8 ("fs: shave 8 bytes off of struct inode")
>
> 8efcf34a263965e4 5c6de586e899a4a80a0ffa2646
> ---------------- --------------------------
> %stddev %change %stddev
> \ | \
> 19335952 +12.4% 21729332 vm-scalability.throughput
> 693688 +11.9% 775935 vm-scalability.median
> 0.56 ± 55% -43.6% 0.32 ± 62% vm-scalability.stddev
> 288.16 -7.0% 267.96 vm-scalability.time.elapsed_time
> 288.16 -7.0% 267.96 vm-scalability.time.elapsed_time.max
> 48921 ± 6% -3.3% 47314 ± 5% vm-scalability.time.involuntary_context_switches
> 3777 -4.4% 3610 vm-scalability.time.maximum_resident_set_size
> 1.074e+09 +0.0% 1.074e+09 vm-scalability.time.minor_page_faults
> 4096 +0.0% 4096 vm-scalability.time.page_size
> 2672 +0.6% 2689 vm-scalability.time.percent_of_cpu_this_job_got
> 5457 -9.4% 4942 vm-scalability.time.system_time
> 2244 +0.9% 2263 vm-scalability.time.user_time
> 5529533 ± 6% -65.2% 1923014 ± 6% vm-scalability.time.voluntary_context_switches
> 4.832e+09 +0.0% 4.832e+09 vm-scalability.workload
> 93827 ± 3% -7.0% 87299 ± 3% interrupts.CAL:Function_call_interrupts
> 26.50 -2.0% 25.98 ± 3% boot-time.boot
> 16.69 -3.2% 16.16 ± 5% boot-time.dhcp
> 674.18 -1.1% 666.43 ± 2% boot-time.idle
> 17.61 -3.5% 17.00 ± 5% boot-time.kernel_boot
> 15034 ± 62% -30.0% 10528 ± 79% softirqs.NET_RX
> 453251 ± 9% -2.9% 440306 ± 14% softirqs.RCU
> 46795 -14.9% 39806 ± 2% softirqs.SCHED
> 3565160 ± 8% +6.1% 3784023 softirqs.TIMER
> 4.87 ± 4% -0.6 4.25 ± 3% mpstat.cpu.idle%
> 0.00 ± 13% -0.0 0.00 ± 14% mpstat.cpu.iowait%
> 0.00 ± 37% +0.0 0.00 ± 37% mpstat.cpu.soft%
> 67.35 -1.8 65.60 mpstat.cpu.sys%
> 27.78 +2.4 30.14 mpstat.cpu.usr%
> 1038 -0.5% 1033 vmstat.memory.buff
> 1117006 -0.3% 1113807 vmstat.memory.cache
> 2.463e+08 -0.2% 2.457e+08 vmstat.memory.free
> 26.00 +1.9% 26.50 vmstat.procs.r
> 42239 ± 6% -55.9% 18619 ± 5% vmstat.system.cs
> 31359 -0.7% 31132 vmstat.system.in
> 0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
> 2713006 -1.0% 2685108 numa-numastat.node0.local_node
> 2716657 -1.0% 2689682 numa-numastat.node0.numa_hit
> 3651 ± 36% +25.3% 4576 ± 34% numa-numastat.node0.other_node
> 0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
> 2713025 -0.5% 2699801 numa-numastat.node1.local_node
> 2714924 -0.5% 2700769 numa-numastat.node1.numa_hit
> 1900 ± 68% -49.1% 968.00 ±162% numa-numastat.node1.other_node
> 21859882 ± 6% -56.1% 9599175 ± 2% cpuidle.C1.time
> 5231991 ± 6% -65.3% 1814200 ± 8% cpuidle.C1.usage
> 620147 ± 9% -22.4% 481528 ± 10% cpuidle.C1E.time
> 7829 ± 6% -34.5% 5126 ± 17% cpuidle.C1E.usage
> 5343219 ± 5% -58.8% 2202020 ± 4% cpuidle.C3.time
> 22942 ± 5% -54.9% 10349 ± 4% cpuidle.C3.usage
> 3.345e+08 ± 3% -15.1% 2.839e+08 ± 3% cpuidle.C6.time
> 355754 ± 3% -16.3% 297683 ± 3% cpuidle.C6.usage
> 248800 ± 6% -74.5% 63413 ± 5% cpuidle.POLL.time
> 90897 ± 6% -76.4% 21409 ± 7% cpuidle.POLL.usage
> 2631 +0.5% 2644 turbostat.Avg_MHz
> 95.35 +0.5 95.88 turbostat.Busy%
> 2759 -0.1% 2757 turbostat.Bzy_MHz
> 5227940 ± 6% -65.4% 1809983 ± 8% turbostat.C1
> 0.27 ± 5% -0.1 0.13 ± 3% turbostat.C1%
> 7646 ± 5% -36.0% 4894 ± 17% turbostat.C1E
> 0.01 +0.0 0.01 turbostat.C1E%
> 22705 ± 5% -56.0% 9995 ± 3% turbostat.C3
> 0.07 ± 7% -0.0 0.03 turbostat.C3%
> 354625 ± 3% -16.3% 296732 ± 3% turbostat.C6
> 4.11 ± 3% -0.4 3.75 ± 2% turbostat.C6%
> 1.68 ± 3% -15.6% 1.42 ± 2% turbostat.CPU%c1
> 0.04 -62.5% 0.01 ± 33% turbostat.CPU%c3
> 2.93 ± 3% -8.1% 2.69 ± 2% turbostat.CPU%c6
> 64.50 -1.6% 63.50 ± 2% turbostat.CoreTmp
> 9095020 -7.6% 8400668 turbostat.IRQ
> 11.78 ± 5% -2.3 9.45 ± 6% turbostat.PKG_%
> 0.10 ± 27% -4.9% 0.10 ± 22% turbostat.Pkg%pc2
> 0.00 ±173% -100.0% 0.00 turbostat.Pkg%pc6
> 68.50 ± 2% +0.0% 68.50 turbostat.PkgTmp
> 230.97 +0.1% 231.10 turbostat.PkgWatt
> 22.45 -0.8% 22.27 turbostat.RAMWatt
> 10304 -8.6% 9415 ± 2% turbostat.SMI
> 2300 +0.0% 2300 turbostat.TSC_MHz
> 269189 -0.6% 267505 meminfo.Active
> 269118 -0.6% 267429 meminfo.Active(anon)
> 167369 -1.7% 164478 meminfo.AnonHugePages
> 245606 +0.2% 246183 meminfo.AnonPages
> 1042 -0.2% 1039 meminfo.Buffers
> 1066883 -0.2% 1064908 meminfo.Cached
> 203421 -0.0% 203421 meminfo.CmaFree
> 204800 +0.0% 204800 meminfo.CmaTotal
> 1.32e+08 -0.0% 1.32e+08 meminfo.CommitLimit
> 485388 ± 15% -5.3% 459546 ± 9% meminfo.Committed_AS
> 2.65e+08 +0.0% 2.65e+08 meminfo.DirectMap1G
> 5240434 ± 8% +0.3% 5257281 ± 16% meminfo.DirectMap2M
> 169088 ± 6% -10.0% 152241 ± 5% meminfo.DirectMap4k
> 2048 +0.0% 2048 meminfo.Hugepagesize
> 150295 +0.1% 150389 meminfo.Inactive
> 149164 +0.1% 149264 meminfo.Inactive(anon)
> 1130 -0.6% 1124 meminfo.Inactive(file)
> 7375 -0.4% 7345 meminfo.KernelStack
> 28041 -0.5% 27895 meminfo.Mapped
> 2.451e+08 -0.2% 2.445e+08 meminfo.MemAvailable
> 2.462e+08 -0.2% 2.457e+08 meminfo.MemFree
> 2.64e+08 -0.0% 2.64e+08 meminfo.MemTotal
> 1179 ± 57% -35.2% 764.50 ±100% meminfo.Mlocked
> 4507108 +3.4% 4660163 meminfo.PageTables
> 50580 -1.9% 49633 meminfo.SReclaimable
> 11620141 +3.3% 12007592 meminfo.SUnreclaim
> 172885 -1.4% 170543 meminfo.Shmem
> 11670722 +3.3% 12057225 meminfo.Slab
> 894024 +0.0% 894327 meminfo.Unevictable
> 3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
> 6.873e+12 -2.1% 6.727e+12 perf-stat.branch-instructions
> 0.08 ± 2% -0.0 0.08 perf-stat.branch-miss-rate%
> 5.269e+09 ± 2% -3.7% 5.072e+09 perf-stat.branch-misses
> 37.34 +1.1 38.44 perf-stat.cache-miss-rate%
> 8.136e+09 ± 2% -7.0% 7.568e+09 ± 3% perf-stat.cache-misses
> 2.179e+10 ± 2% -9.6% 1.969e+10 ± 3% perf-stat.cache-references
> 12287807 ± 6% -59.1% 5020001 ± 5% perf-stat.context-switches
> 0.87 -3.5% 0.84 perf-stat.cpi
> 2.116e+13 -6.5% 1.978e+13 perf-stat.cpu-cycles
> 24160 ± 5% -13.8% 20819 ± 3% perf-stat.cpu-migrations
> 0.16 ± 6% +0.0 0.16 ± 4% perf-stat.dTLB-load-miss-rate%
> 1.057e+10 ± 6% +0.4% 1.061e+10 ± 5% perf-stat.dTLB-load-misses
> 6.792e+12 -3.5% 6.557e+12 perf-stat.dTLB-loads
> 0.00 ± 9% +0.0 0.00 ± 20% perf-stat.dTLB-store-miss-rate%
> 22950816 ± 9% +6.2% 24373827 ± 20% perf-stat.dTLB-store-misses
> 9.067e+11 -0.7% 9.005e+11 perf-stat.dTLB-stores
> 95.10 +2.7 97.81 perf-stat.iTLB-load-miss-rate%
> 2.437e+09 +6.8% 2.604e+09 ± 4% perf-stat.iTLB-load-misses
> 1.257e+08 ± 8% -53.7% 58211601 ± 6% perf-stat.iTLB-loads
> 2.44e+13 -3.1% 2.364e+13 perf-stat.instructions
> 10011 -9.1% 9100 ± 4% perf-stat.instructions-per-iTLB-miss
> 1.15 +3.6% 1.20 perf-stat.ipc
> 1.074e+09 -0.0% 1.074e+09 perf-stat.minor-faults
> 64.40 ± 4% -4.2 60.22 ± 5% perf-stat.node-load-miss-rate%
> 4.039e+09 ± 2% -10.9% 3.599e+09 ± 3% perf-stat.node-load-misses
> 2.241e+09 ± 10% +6.7% 2.39e+09 ± 12% perf-stat.node-loads
> 49.24 -1.7 47.53 perf-stat.node-store-miss-rate%
> 9.005e+08 -18.3% 7.357e+08 ± 2% perf-stat.node-store-misses
> 9.282e+08 -12.5% 8.123e+08 ± 2% perf-stat.node-stores
> 1.074e+09 -0.0% 1.074e+09 perf-stat.page-faults
> 5049 -3.1% 4893 perf-stat.path-length
> 67282 -0.6% 66860 proc-vmstat.nr_active_anon
> 61404 +0.2% 61545 proc-vmstat.nr_anon_pages
> 6117769 -0.2% 6104302 proc-vmstat.nr_dirty_background_threshold
> 12250497 -0.2% 12223531 proc-vmstat.nr_dirty_threshold
> 266952 -0.2% 266464 proc-vmstat.nr_file_pages
> 50855 -0.0% 50855 proc-vmstat.nr_free_cma
> 61552505 -0.2% 61417644 proc-vmstat.nr_free_pages
> 37259 +0.1% 37286 proc-vmstat.nr_inactive_anon
> 282.25 -0.6% 280.50 proc-vmstat.nr_inactive_file
> 7375 -0.4% 7344 proc-vmstat.nr_kernel_stack
> 7124 -0.5% 7085 proc-vmstat.nr_mapped
> 295.00 ± 57% -35.3% 190.75 ±100% proc-vmstat.nr_mlock
> 1125531 +3.4% 1163908 proc-vmstat.nr_page_table_pages
> 43193 -1.3% 42613 proc-vmstat.nr_shmem
> 12644 -1.9% 12407 proc-vmstat.nr_slab_reclaimable
> 2901812 +3.3% 2998814 proc-vmstat.nr_slab_unreclaimable
> 223506 +0.0% 223581 proc-vmstat.nr_unevictable
> 67282 -0.6% 66860 proc-vmstat.nr_zone_active_anon
> 37259 +0.1% 37286 proc-vmstat.nr_zone_inactive_anon
> 282.25 -0.6% 280.50 proc-vmstat.nr_zone_inactive_file
> 223506 +0.0% 223581 proc-vmstat.nr_zone_unevictable
> 2685 ±104% +108.2% 5591 ± 84% proc-vmstat.numa_hint_faults
> 1757 ±140% +77.9% 3125 ± 87% proc-vmstat.numa_hint_faults_local
> 5456982 -0.7% 5417832 proc-vmstat.numa_hit
> 5451431 -0.7% 5412285 proc-vmstat.numa_local
> 5551 -0.1% 5547 proc-vmstat.numa_other
> 977.50 ± 40% +193.7% 2871 ±101% proc-vmstat.numa_pages_migrated
> 10519 ±113% +140.4% 25286 ± 97% proc-vmstat.numa_pte_updates
> 10726 ± 8% -3.8% 10315 ± 7% proc-vmstat.pgactivate
> 8191987 -0.5% 8150192 proc-vmstat.pgalloc_normal
> 1.074e+09 -0.0% 1.074e+09 proc-vmstat.pgfault
> 8143430 -2.7% 7926613 proc-vmstat.pgfree
> 977.50 ± 40% +193.7% 2871 ±101% proc-vmstat.pgmigrate_success
> 2155 -0.4% 2147 proc-vmstat.pgpgin
> 2049 -0.0% 2048 proc-vmstat.pgpgout
> 67375 -0.2% 67232 slabinfo.Acpi-Namespace.active_objs
> 67375 -0.2% 67232 slabinfo.Acpi-Namespace.num_objs
> 604.00 ± 19% -0.1% 603.50 ± 9% slabinfo.Acpi-ParseExt.active_objs
> 604.00 ± 19% -0.1% 603.50 ± 9% slabinfo.Acpi-ParseExt.num_objs
> 7972 ± 4% -0.3% 7949 ± 2% slabinfo.anon_vma.active_objs
> 7972 ± 4% -0.3% 7949 ± 2% slabinfo.anon_vma.num_objs
> 1697 ± 12% -10.0% 1528 ± 7% slabinfo.avtab_node.active_objs
> 1697 ± 12% -10.0% 1528 ± 7% slabinfo.avtab_node.num_objs
> 58071 -0.1% 58034 slabinfo.dentry.active_objs
> 1351 ± 25% -31.2% 930.50 ± 30% slabinfo.dmaengine-unmap-16.active_objs
> 1351 ± 25% -31.2% 930.50 ± 30% slabinfo.dmaengine-unmap-16.num_objs
> 1354 ± 6% -7.1% 1258 ± 9% slabinfo.eventpoll_pwq.active_objs
> 1354 ± 6% -7.1% 1258 ± 9% slabinfo.eventpoll_pwq.num_objs
> 8880 ± 5% +0.0% 8883 ± 6% slabinfo.filp.num_objs
> 2715 ± 7% -9.0% 2470 ± 5% slabinfo.kmalloc-1024.active_objs
> 2831 ± 7% -10.2% 2543 ± 3% slabinfo.kmalloc-1024.num_objs
> 12480 -2.7% 12140 ± 2% slabinfo.kmalloc-16.active_objs
> 12480 -2.7% 12140 ± 2% slabinfo.kmalloc-16.num_objs
> 39520 ± 3% +1.0% 39922 slabinfo.kmalloc-32.active_objs
> 37573 +0.1% 37616 slabinfo.kmalloc-64.active_objs
> 37590 +0.2% 37673 slabinfo.kmalloc-64.num_objs
> 17182 +2.1% 17550 ± 2% slabinfo.kmalloc-8.active_objs
> 17663 +2.2% 18045 ± 2% slabinfo.kmalloc-8.num_objs
> 4428 ± 7% -2.9% 4298 ± 4% slabinfo.kmalloc-96.active_objs
> 956.50 ± 10% +4.5% 999.50 ± 16% slabinfo.nsproxy.active_objs
> 956.50 ± 10% +4.5% 999.50 ± 16% slabinfo.nsproxy.num_objs
> 19094 ± 3% -3.3% 18463 ± 5% slabinfo.pid.active_objs
> 19094 ± 3% -3.0% 18523 ± 5% slabinfo.pid.num_objs
> 2088 ± 14% -21.1% 1648 ± 8% slabinfo.skbuff_head_cache.active_objs
> 2136 ± 16% -19.5% 1720 ± 7% slabinfo.skbuff_head_cache.num_objs
> 872.00 ± 16% -9.3% 791.00 ± 15% slabinfo.task_group.active_objs
> 872.00 ± 16% -9.3% 791.00 ± 15% slabinfo.task_group.num_objs
> 57677936 +3.4% 59655533 slabinfo.vm_area_struct.active_objs
> 1442050 +3.4% 1491482 slabinfo.vm_area_struct.active_slabs
> 57682039 +3.4% 59659325 slabinfo.vm_area_struct.num_objs
> 1442050 +3.4% 1491482 slabinfo.vm_area_struct.num_slabs
> 132527 ± 15% +1.9% 135006 ± 2% numa-meminfo.node0.Active
> 132475 ± 15% +1.9% 134985 ± 2% numa-meminfo.node0.Active(anon)
> 86736 ± 32% +6.5% 92413 ± 15% numa-meminfo.node0.AnonHugePages
> 124613 ± 19% +3.4% 128788 ± 4% numa-meminfo.node0.AnonPages
> 556225 ± 11% -5.1% 527584 ± 12% numa-meminfo.node0.FilePages
> 100973 ± 57% -27.7% 73011 ± 89% numa-meminfo.node0.Inactive
> 100128 ± 57% -27.4% 72686 ± 90% numa-meminfo.node0.Inactive(anon)
> 843.75 ± 57% -61.5% 324.75 ±115% numa-meminfo.node0.Inactive(file)
> 4058 ± 3% +3.5% 4200 ± 4% numa-meminfo.node0.KernelStack
> 12846 ± 26% +1.6% 13053 ± 26% numa-meminfo.node0.Mapped
> 1.228e+08 -0.1% 1.226e+08 numa-meminfo.node0.MemFree
> 1.32e+08 +0.0% 1.32e+08 numa-meminfo.node0.MemTotal
> 9191573 +1.7% 9347737 numa-meminfo.node0.MemUsed
> 2321138 +2.2% 2372650 numa-meminfo.node0.PageTables
> 24816 ± 13% -5.8% 23377 ± 16% numa-meminfo.node0.SReclaimable
> 5985628 +2.2% 6116406 numa-meminfo.node0.SUnreclaim
> 108046 ± 57% -26.9% 78955 ± 80% numa-meminfo.node0.Shmem
> 6010444 +2.2% 6139784 numa-meminfo.node0.Slab
> 447344 +0.2% 448404 numa-meminfo.node0.Unevictable
> 136675 ± 13% -3.0% 132521 ± 2% numa-meminfo.node1.Active
> 136655 ± 13% -3.1% 132467 ± 2% numa-meminfo.node1.Active(anon)
> 80581 ± 35% -10.5% 72102 ± 18% numa-meminfo.node1.AnonHugePages
> 120993 ± 20% -3.0% 117407 ± 4% numa-meminfo.node1.AnonPages
> 511901 ± 12% +5.2% 538396 ± 11% numa-meminfo.node1.FilePages
> 49526 ±116% +56.3% 77402 ± 84% numa-meminfo.node1.Inactive
> 49238 ±115% +55.6% 76603 ± 85% numa-meminfo.node1.Inactive(anon)
> 287.75 ±168% +177.6% 798.75 ± 47% numa-meminfo.node1.Inactive(file)
> 3316 ± 3% -5.2% 3143 ± 5% numa-meminfo.node1.KernelStack
> 15225 ± 23% -2.0% 14918 ± 22% numa-meminfo.node1.Mapped
> 1.234e+08 -0.3% 1.23e+08 numa-meminfo.node1.MemFree
> 1.321e+08 -0.0% 1.321e+08 numa-meminfo.node1.MemTotal
> 8664187 +4.4% 9041729 numa-meminfo.node1.MemUsed
> 2185511 +4.6% 2286356 numa-meminfo.node1.PageTables
> 25762 ± 12% +1.9% 26255 ± 15% numa-meminfo.node1.SReclaimable
> 5634774 +4.5% 5887602 numa-meminfo.node1.SUnreclaim
> 65037 ± 92% +40.9% 91621 ± 68% numa-meminfo.node1.Shmem
> 5660536 +4.5% 5913858 numa-meminfo.node1.Slab
> 446680 -0.2% 445922 numa-meminfo.node1.Unevictable
> 15553 ± 18% -14.1% 13366 ± 11% numa-vmstat.node0
> 33116 ± 15% +1.9% 33742 ± 2% numa-vmstat.node0.nr_active_anon
> 31157 ± 19% +3.3% 32196 ± 4% numa-vmstat.node0.nr_anon_pages
> 139001 ± 11% -5.1% 131864 ± 12% numa-vmstat.node0.nr_file_pages
> 30692447 -0.1% 30654328 numa-vmstat.node0.nr_free_pages
> 24983 ± 57% -27.4% 18142 ± 90% numa-vmstat.node0.nr_inactive_anon
> 210.25 ± 57% -61.6% 80.75 ±116% numa-vmstat.node0.nr_inactive_file
> 4058 ± 3% +3.5% 4199 ± 4% numa-vmstat.node0.nr_kernel_stack
> 3304 ± 26% +1.2% 3344 ± 25% numa-vmstat.node0.nr_mapped
> 139.00 ± 60% -20.3% 110.75 ±100% numa-vmstat.node0.nr_mlock
> 579931 +2.1% 592262 numa-vmstat.node0.nr_page_table_pages
> 26956 ± 57% -26.9% 19707 ± 80% numa-vmstat.node0.nr_shmem
> 6203 ± 13% -5.8% 5844 ± 16% numa-vmstat.node0.nr_slab_reclaimable
> 1495541 +2.2% 1527781 numa-vmstat.node0.nr_slab_unreclaimable
> 111835 +0.2% 112100 numa-vmstat.node0.nr_unevictable
> 33116 ± 15% +1.9% 33742 ± 2% numa-vmstat.node0.nr_zone_active_anon
> 24983 ± 57% -27.4% 18142 ± 90% numa-vmstat.node0.nr_zone_inactive_anon
> 210.25 ± 57% -61.6% 80.75 ±116% numa-vmstat.node0.nr_zone_inactive_file
> 111835 +0.2% 112100 numa-vmstat.node0.nr_zone_unevictable
> 1840693 ± 2% +1.6% 1869501 numa-vmstat.node0.numa_hit
> 144048 +0.2% 144385 numa-vmstat.node0.numa_interleave
> 1836656 ± 2% +1.5% 1864579 numa-vmstat.node0.numa_local
> 4036 ± 33% +21.9% 4921 ± 31% numa-vmstat.node0.numa_other
> 11577 ± 24% +17.8% 13635 ± 11% numa-vmstat.node1
> 34171 ± 13% -3.1% 33126 ± 2% numa-vmstat.node1.nr_active_anon
> 30247 ± 20% -3.0% 29352 ± 4% numa-vmstat.node1.nr_anon_pages
> 127979 ± 12% +5.2% 134601 ± 11% numa-vmstat.node1.nr_file_pages
> 50855 -0.0% 50855 numa-vmstat.node1.nr_free_cma
> 30858027 -0.3% 30763794 numa-vmstat.node1.nr_free_pages
> 12305 ±116% +55.6% 19145 ± 85% numa-vmstat.node1.nr_inactive_anon
> 71.75 ±168% +177.7% 199.25 ± 47% numa-vmstat.node1.nr_inactive_file
> 3315 ± 3% -5.2% 3144 ± 5% numa-vmstat.node1.nr_kernel_stack
> 3823 ± 23% -1.8% 3754 ± 22% numa-vmstat.node1.nr_mapped
> 155.00 ± 60% -48.2% 80.25 ±100% numa-vmstat.node1.nr_mlock
> 545973 +4.6% 571069 numa-vmstat.node1.nr_page_table_pages
> 16263 ± 92% +40.9% 22907 ± 68% numa-vmstat.node1.nr_shmem
> 6440 ± 12% +1.9% 6563 ± 15% numa-vmstat.node1.nr_slab_reclaimable
> 1407904 +4.5% 1471030 numa-vmstat.node1.nr_slab_unreclaimable
> 111669 -0.2% 111480 numa-vmstat.node1.nr_unevictable
> 34171 ± 13% -3.1% 33126 ± 2% numa-vmstat.node1.nr_zone_active_anon
> 12305 ±116% +55.6% 19145 ± 85% numa-vmstat.node1.nr_zone_inactive_anon
> 71.75 ±168% +177.7% 199.25 ± 47% numa-vmstat.node1.nr_zone_inactive_file
> 111669 -0.2% 111480 numa-vmstat.node1.nr_zone_unevictable
> 1846889 ± 2% +1.4% 1872108 numa-vmstat.node1.numa_hit
> 144151 -0.2% 143830 numa-vmstat.node1.numa_interleave
> 1699975 ± 2% +1.6% 1726462 numa-vmstat.node1.numa_local
> 146913 -0.9% 145645 numa-vmstat.node1.numa_other
> 0.00 +1.2e+12% 12083 ±100% sched_debug.cfs_rq:/.MIN_vruntime.avg
> 0.00 +3.4e+13% 338347 ±100% sched_debug.cfs_rq:/.MIN_vruntime.max
> 0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
> 0.00 +1.5e+28% 62789 ±100% sched_debug.cfs_rq:/.MIN_vruntime.stddev
> 118226 +0.4% 118681 sched_debug.cfs_rq:/.exec_clock.avg
> 119425 +0.3% 119724 sched_debug.cfs_rq:/.exec_clock.max
> 117183 +0.1% 117244 sched_debug.cfs_rq:/.exec_clock.min
> 395.73 ± 14% +11.0% 439.18 ± 23% sched_debug.cfs_rq:/.exec_clock.stddev
> 32398 +14.1% 36980 ± 9% sched_debug.cfs_rq:/.load.avg
> 73141 ± 5% +128.0% 166780 ± 57% sched_debug.cfs_rq:/.load.max
> 17867 ± 19% +30.4% 23301 ± 13% sched_debug.cfs_rq:/.load.min
> 11142 ± 3% +146.4% 27458 ± 62% sched_debug.cfs_rq:/.load.stddev
> 59.52 ± 2% -2.3% 58.13 ± 6% sched_debug.cfs_rq:/.load_avg.avg
> 305.15 ± 10% -8.8% 278.35 ± 3% sched_debug.cfs_rq:/.load_avg.max
> 27.20 ± 6% +8.6% 29.55 sched_debug.cfs_rq:/.load_avg.min
> 71.12 ± 9% -8.1% 65.38 ± 5% sched_debug.cfs_rq:/.load_avg.stddev
> 0.00 +1.2e+12% 12083 ±100% sched_debug.cfs_rq:/.max_vruntime.avg
> 0.00 +3.4e+13% 338347 ±100% sched_debug.cfs_rq:/.max_vruntime.max
> 0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
> 0.00 +1.5e+28% 62789 ±100% sched_debug.cfs_rq:/.max_vruntime.stddev
> 3364874 +0.8% 3391671 sched_debug.cfs_rq:/.min_vruntime.avg
> 3399316 +0.7% 3423051 sched_debug.cfs_rq:/.min_vruntime.max
> 3303899 +0.9% 3335083 sched_debug.cfs_rq:/.min_vruntime.min
> 20061 ± 14% -2.5% 19552 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
> 0.87 +3.1% 0.89 ± 2% sched_debug.cfs_rq:/.nr_running.avg
> 1.00 +5.0% 1.05 ± 8% sched_debug.cfs_rq:/.nr_running.max
> 0.50 ± 19% +20.0% 0.60 sched_debug.cfs_rq:/.nr_running.min
> 0.16 ± 14% -6.0% 0.15 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
> 4.14 ± 6% -6.5% 3.87 ± 10% sched_debug.cfs_rq:/.nr_spread_over.avg
> 15.10 ± 7% +29.8% 19.60 ± 12% sched_debug.cfs_rq:/.nr_spread_over.max
> 1.50 ± 11% -16.7% 1.25 ± 20% sched_debug.cfs_rq:/.nr_spread_over.min
> 2.82 ± 8% +23.0% 3.47 ± 13% sched_debug.cfs_rq:/.nr_spread_over.stddev
> 7.31 -6.2% 6.86 ± 69% sched_debug.cfs_rq:/.removed.load_avg.avg
> 204.80 -28.2% 147.10 ± 57% sched_debug.cfs_rq:/.removed.load_avg.max
> 38.01 -20.3% 30.29 ± 60% sched_debug.cfs_rq:/.removed.load_avg.stddev
> 337.44 -6.0% 317.11 ± 69% sched_debug.cfs_rq:/.removed.runnable_sum.avg
> 9448 -27.8% 6819 ± 57% sched_debug.cfs_rq:/.removed.runnable_sum.max
> 1753 -20.2% 1399 ± 60% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
> 2.16 ± 56% -25.7% 1.60 ± 57% sched_debug.cfs_rq:/.removed.util_avg.avg
> 60.40 ± 56% -37.3% 37.90 ± 61% sched_debug.cfs_rq:/.removed.util_avg.max
> 11.21 ± 56% -33.8% 7.42 ± 58% sched_debug.cfs_rq:/.removed.util_avg.stddev
> 30.32 ± 2% +0.6% 30.52 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
> 80.90 ± 14% -19.2% 65.35 ± 34% sched_debug.cfs_rq:/.runnable_load_avg.max
> 14.95 ± 24% +47.2% 22.00 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.min
> 12.10 ± 19% -27.7% 8.74 ± 42% sched_debug.cfs_rq:/.runnable_load_avg.stddev
> 30853 +13.2% 34914 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
> 61794 +152.3% 155911 ± 64% sched_debug.cfs_rq:/.runnable_weight.max
> 17867 ± 19% +30.4% 23300 ± 13% sched_debug.cfs_rq:/.runnable_weight.min
> 8534 ± 7% +193.7% 25066 ± 72% sched_debug.cfs_rq:/.runnable_weight.stddev
> 45914 ± 45% -50.7% 22619 ± 62% sched_debug.cfs_rq:/.spread0.avg
> 80385 ± 26% -32.8% 53990 ± 34% sched_debug.cfs_rq:/.spread0.max
> -15013 +126.3% -33973 sched_debug.cfs_rq:/.spread0.min
> 20053 ± 14% -2.5% 19541 ± 16% sched_debug.cfs_rq:/.spread0.stddev
> 964.19 -0.2% 962.12 sched_debug.cfs_rq:/.util_avg.avg
> 1499 ± 14% -13.2% 1301 ± 4% sched_debug.cfs_rq:/.util_avg.max
> 510.75 ± 12% +35.6% 692.65 ± 13% sched_debug.cfs_rq:/.util_avg.min
> 177.84 ± 21% -34.4% 116.72 ± 23% sched_debug.cfs_rq:/.util_avg.stddev
> 768.04 +5.1% 807.40 sched_debug.cfs_rq:/.util_est_enqueued.avg
> 1192 ± 14% -22.5% 924.20 sched_debug.cfs_rq:/.util_est_enqueued.max
> 170.90 ± 99% +89.6% 324.05 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.min
> 201.22 ± 17% -36.7% 127.44 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.stddev
> 111567 ± 4% +7.6% 120067 ± 7% sched_debug.cpu.avg_idle.avg
> 549432 ± 16% -16.6% 458264 ± 5% sched_debug.cpu.avg_idle.max
> 4419 ± 79% +87.7% 8293 ± 31% sched_debug.cpu.avg_idle.min
> 123967 ± 13% -8.9% 112928 ± 15% sched_debug.cpu.avg_idle.stddev
> 147256 -0.4% 146724 sched_debug.cpu.clock.avg
> 147258 -0.4% 146728 sched_debug.cpu.clock.max
> 147252 -0.4% 146720 sched_debug.cpu.clock.min
> 1.70 ± 11% +39.5% 2.37 ± 29% sched_debug.cpu.clock.stddev
> 147256 -0.4% 146724 sched_debug.cpu.clock_task.avg
> 147258 -0.4% 146728 sched_debug.cpu.clock_task.max
> 147252 -0.4% 146720 sched_debug.cpu.clock_task.min
> 1.70 ± 11% +39.4% 2.37 ± 29% sched_debug.cpu.clock_task.stddev
> 30.85 +0.4% 30.97 ± 3% sched_debug.cpu.cpu_load[0].avg
> 84.15 ± 13% -13.8% 72.55 ± 21% sched_debug.cpu.cpu_load[0].max
> 17.35 ± 24% +26.5% 21.95 ± 24% sched_debug.cpu.cpu_load[0].min
> 12.75 ± 18% -19.9% 10.22 ± 22% sched_debug.cpu.cpu_load[0].stddev
> 30.88 +1.1% 31.22 ± 2% sched_debug.cpu.cpu_load[1].avg
> 77.35 ± 19% -9.8% 69.75 ± 18% sched_debug.cpu.cpu_load[1].max
> 18.60 ± 19% +31.2% 24.40 ± 10% sched_debug.cpu.cpu_load[1].min
> 11.10 ± 24% -20.5% 8.82 ± 25% sched_debug.cpu.cpu_load[1].stddev
> 31.13 +2.3% 31.84 ± 2% sched_debug.cpu.cpu_load[2].avg
> 71.40 ± 26% +0.4% 71.70 ± 20% sched_debug.cpu.cpu_load[2].max
> 19.45 ± 21% +32.9% 25.85 ± 5% sched_debug.cpu.cpu_load[2].min
> 9.61 ± 36% -8.3% 8.81 ± 27% sched_debug.cpu.cpu_load[2].stddev
> 31.79 +2.9% 32.71 ± 3% sched_debug.cpu.cpu_load[3].avg
> 76.75 ± 19% +8.1% 82.95 ± 37% sched_debug.cpu.cpu_load[3].max
> 20.25 ± 14% +33.3% 27.00 ± 3% sched_debug.cpu.cpu_load[3].min
> 9.88 ± 28% +6.7% 10.54 ± 52% sched_debug.cpu.cpu_load[3].stddev
> 32.95 +2.3% 33.71 ± 3% sched_debug.cpu.cpu_load[4].avg
> 107.65 ± 9% +3.9% 111.90 ± 33% sched_debug.cpu.cpu_load[4].max
> 20.35 ± 8% +30.5% 26.55 sched_debug.cpu.cpu_load[4].min
> 15.33 ± 12% +1.2% 15.51 ± 46% sched_debug.cpu.cpu_load[4].stddev
> 1244 +2.0% 1269 sched_debug.cpu.curr->pid.avg
> 4194 -0.7% 4165 sched_debug.cpu.curr->pid.max
> 655.90 ± 22% +16.7% 765.75 ± 3% sched_debug.cpu.curr->pid.min
> 656.54 -0.8% 651.46 sched_debug.cpu.curr->pid.stddev
> 32481 +13.8% 36973 ± 9% sched_debug.cpu.load.avg
> 73132 ± 5% +130.1% 168294 ± 56% sched_debug.cpu.load.max
> 17867 ± 19% +20.3% 21497 sched_debug.cpu.load.min
> 11194 ± 3% +148.6% 27826 ± 61% sched_debug.cpu.load.stddev
> 500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
> 500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
> 500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
> 4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
> 4294 -0.0% 4294 sched_debug.cpu.next_balance.max
> 4294 -0.0% 4294 sched_debug.cpu.next_balance.min
> 0.00 ± 4% -3.6% 0.00 ± 5% sched_debug.cpu.next_balance.stddev
> 126191 -0.0% 126167 sched_debug.cpu.nr_load_updates.avg
> 133303 -0.6% 132467 sched_debug.cpu.nr_load_updates.max
> 124404 +0.4% 124852 sched_debug.cpu.nr_load_updates.min
> 1826 ± 13% -8.5% 1672 ± 5% sched_debug.cpu.nr_load_updates.stddev
> 0.90 +2.2% 0.92 ± 2% sched_debug.cpu.nr_running.avg
> 1.80 ± 7% -5.6% 1.70 ± 5% sched_debug.cpu.nr_running.max
> 0.50 ± 19% +20.0% 0.60 sched_debug.cpu.nr_running.min
> 0.29 ± 7% -13.6% 0.25 ± 4% sched_debug.cpu.nr_running.stddev
> 204123 ± 8% -56.8% 88239 ± 5% sched_debug.cpu.nr_switches.avg
> 457439 ± 15% -60.4% 181234 ± 8% sched_debug.cpu.nr_switches.max
> 116531 ± 16% -62.8% 43365 ± 13% sched_debug.cpu.nr_switches.min
> 72910 ± 22% -50.5% 36095 ± 13% sched_debug.cpu.nr_switches.stddev
> 0.03 ± 19% -52.9% 0.01 ± 35% sched_debug.cpu.nr_uninterruptible.avg
> 16.50 ± 14% -11.5% 14.60 ± 16% sched_debug.cpu.nr_uninterruptible.max
> -16.90 -26.3% -12.45 sched_debug.cpu.nr_uninterruptible.min
> 7.97 ± 12% -19.4% 6.42 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
> 210655 ± 8% -56.7% 91289 ± 5% sched_debug.cpu.sched_count.avg
> 467518 ± 15% -60.4% 185362 ± 9% sched_debug.cpu.sched_count.max
> 120764 ± 16% -62.5% 45233 ± 14% sched_debug.cpu.sched_count.min
> 74259 ± 23% -51.0% 36403 ± 14% sched_debug.cpu.sched_count.stddev
> 89621 ± 8% -63.5% 32750 ± 7% sched_debug.cpu.sched_goidle.avg
> 189668 ± 8% -67.5% 61630 ± 18% sched_debug.cpu.sched_goidle.max
> 54342 ± 16% -68.0% 17412 ± 15% sched_debug.cpu.sched_goidle.min
> 28685 ± 15% -58.7% 11834 ± 16% sched_debug.cpu.sched_goidle.stddev
> 109820 ± 8% -56.0% 48303 ± 5% sched_debug.cpu.ttwu_count.avg
> 144975 ± 13% -41.1% 85424 ± 7% sched_debug.cpu.ttwu_count.max
> 96409 ± 8% -61.1% 37542 ± 6% sched_debug.cpu.ttwu_count.min
> 12310 ± 20% -2.4% 12009 ± 14% sched_debug.cpu.ttwu_count.stddev
> 9749 ± 10% -5.0% 9257 ± 6% sched_debug.cpu.ttwu_local.avg
> 45094 ± 30% +4.8% 47270 ± 13% sched_debug.cpu.ttwu_local.max
> 1447 ± 6% +1.2% 1465 ± 7% sched_debug.cpu.ttwu_local.min
> 12231 ± 24% -6.1% 11487 ± 13% sched_debug.cpu.ttwu_local.stddev
> 147253 -0.4% 146720 sched_debug.cpu_clk
> 996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
> 996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
> 996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
> 4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
> 147253 -0.4% 146720 sched_debug.ktime
> 950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
> 950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
> 950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
> 0.00 ±146% -69.8% 0.00 ±100% sched_debug.rt_rq:/.rt_time.avg
> 0.04 ±146% -69.8% 0.01 ±100% sched_debug.rt_rq:/.rt_time.max
> 0.01 ±146% -69.8% 0.00 ±100% sched_debug.rt_rq:/.rt_time.stddev
> 147626 -0.3% 147114 sched_debug.sched_clk
> 1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
> 4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
> 24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
> 3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
> 1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
> 4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
> 68.63 -2.2 66.43 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
> 73.63 -1.9 71.69 perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.vma_link.mmap_region.do_mmap
> 73.63 -1.9 71.69 perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link.mmap_region
> 73.99 -1.9 72.05 perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 78.36 -1.7 76.67 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
> 79.68 -1.5 78.19 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
> 81.34 -1.2 80.10 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 83.36 -1.2 82.15 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 81.59 -1.2 80.39 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 83.38 -1.2 82.18 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
> 82.00 -1.2 80.83 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 2.44 ± 5% -0.7 1.79 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
> 3.17 ± 4% -0.6 2.62 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
> 1.15 ± 2% -0.2 0.97 ± 2% perf-profile.calltrace.cycles-pp.up_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.55 ± 4% +0.0 0.58 ± 3% perf-profile.calltrace.cycles-pp.__rb_insert_augmented.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 6.67 +0.1 6.76 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
> 0.95 ± 14% +0.1 1.04 ± 11% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 6.70 +0.1 6.79 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
> 6.75 +0.1 6.86 perf-profile.calltrace.cycles-pp.page_fault
> 0.61 ± 6% +0.1 0.75 ± 3% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
> 1.09 ± 2% +0.1 1.23 ± 3% perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.__do_page_fault.do_page_fault.page_fault
> 0.74 ± 4% +0.2 0.91 ± 3% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
> 1.35 +0.2 1.56 perf-profile.calltrace.cycles-pp.unmapped_area_topdown.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff
> 1.39 +0.2 1.61 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
> 1.56 +0.2 1.78 ± 2% perf-profile.calltrace.cycles-pp.find_vma.__do_page_fault.do_page_fault.page_fault
> 1.50 +0.2 1.72 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
> 2.71 +0.3 3.05 perf-profile.calltrace.cycles-pp.native_irq_return_iret
> 2.15 +0.4 2.50 ± 3% perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 2.70 ± 5% +0.4 3.07 ± 3% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> 3.83 +0.4 4.26 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
> 0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
> 68.65 -2.2 66.45 perf-profile.children.cycles-pp.osq_lock
> 73.63 -1.9 71.69 perf-profile.children.cycles-pp.call_rwsem_down_write_failed
> 73.63 -1.9 71.69 perf-profile.children.cycles-pp.rwsem_down_write_failed
> 73.99 -1.9 72.05 perf-profile.children.cycles-pp.down_write
> 78.36 -1.7 76.68 perf-profile.children.cycles-pp.vma_link
> 79.70 -1.5 78.21 perf-profile.children.cycles-pp.mmap_region
> 81.35 -1.2 80.12 perf-profile.children.cycles-pp.do_mmap
> 81.61 -1.2 80.41 perf-profile.children.cycles-pp.vm_mmap_pgoff
> 83.48 -1.2 82.28 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
> 83.46 -1.2 82.26 perf-profile.children.cycles-pp.do_syscall_64
> 82.01 -1.2 80.85 perf-profile.children.cycles-pp.ksys_mmap_pgoff
> 2.51 ± 5% -0.6 1.87 perf-profile.children.cycles-pp.__handle_mm_fault
> 3.24 ± 4% -0.5 2.70 perf-profile.children.cycles-pp.handle_mm_fault
> 1.23 ± 2% -0.2 1.06 ± 2% perf-profile.children.cycles-pp.up_write
> 0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.do_idle
> 0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.secondary_startup_64
> 0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.cpu_startup_entry
> 0.18 ± 13% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.start_secondary
> 0.26 ± 11% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.rwsem_wake
> 0.26 ± 11% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.call_rwsem_wake
> 0.07 ± 17% -0.1 0.00 perf-profile.children.cycles-pp.intel_idle
> 0.08 ± 14% -0.1 0.01 ±173% perf-profile.children.cycles-pp.cpuidle_enter_state
> 0.06 ± 13% -0.1 0.00 perf-profile.children.cycles-pp.schedule
> 0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.save_stack_trace_tsk
> 0.18 ± 9% -0.1 0.12 perf-profile.children.cycles-pp.wake_up_q
> 0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.sched_ttwu_pending
> 0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.__save_stack_trace
> 0.18 ± 8% -0.1 0.13 perf-profile.children.cycles-pp.try_to_wake_up
> 0.05 -0.1 0.00 perf-profile.children.cycles-pp.unwind_next_frame
> 0.11 ± 6% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.enqueue_task_fair
> 0.12 ± 9% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.ttwu_do_activate
> 0.08 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__account_scheduler_latency
> 0.11 ± 10% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.enqueue_entity
> 0.10 ± 17% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__schedule
> 0.36 ± 5% -0.0 0.32 ± 7% perf-profile.children.cycles-pp.osq_unlock
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_write
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.uart_console_write
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.__softirqentry_text_start
> 0.08 ± 11% -0.0 0.05 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.console_unlock
> 0.05 ± 9% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.update_load_avg
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_work_run_list
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_exit
> 0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.process_one_work
> 0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.ktime_get
> 0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.__vma_link_file
> 0.37 ± 3% -0.0 0.36 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
> 0.24 ± 6% -0.0 0.23 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
> 0.37 ± 4% -0.0 0.36 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
> 0.06 ± 14% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.file_has_perm
> 0.31 ± 5% -0.0 0.31 ± 7% perf-profile.children.cycles-pp.hrtimer_interrupt
> 0.16 ± 10% -0.0 0.15 ± 7% perf-profile.children.cycles-pp.update_process_times
> 0.04 ± 57% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.write
> 0.06 ± 7% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.native_iret
> 0.62 ± 5% +0.0 0.62 ± 2% perf-profile.children.cycles-pp.__rb_insert_augmented
> 0.16 ± 10% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.tick_sched_handle
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.ksys_write
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.worker_thread
> 0.18 ± 9% +0.0 0.18 ± 8% perf-profile.children.cycles-pp.tick_sched_timer
> 0.05 ± 8% +0.0 0.06 ± 9% perf-profile.children.cycles-pp._cond_resched
> 0.11 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.scheduler_tick
> 0.08 ± 10% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.mem_cgroup_from_task
> 0.08 ± 5% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.task_tick_fair
> 0.06 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.security_mmap_addr
> 0.07 ± 7% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
> 0.09 ± 5% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.vma_gap_callbacks_rotate
> 0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.ret_from_fork
> 0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.kthread
> 0.06 ± 13% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.fput
> 0.07 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__slab_alloc
> 0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.prepend_path
> 0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.new_slab
> 0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
> 0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.vfs_write
> 0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.down_write_killable
> 0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.___slab_alloc
> 0.07 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.perf_exclude_event
> 0.04 ± 58% +0.0 0.06 ± 9% perf-profile.children.cycles-pp.get_page_from_freelist
> 0.04 ± 58% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
> 0.03 ±100% +0.0 0.04 ± 58% perf-profile.children.cycles-pp.__pte_alloc
> 0.21 ± 3% +0.0 0.23 ± 6% perf-profile.children.cycles-pp.__entry_trampoline_start
> 0.21 ± 4% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
> 0.11 ± 7% +0.0 0.14 ± 8% perf-profile.children.cycles-pp.selinux_mmap_file
> 0.04 ± 57% +0.0 0.06 perf-profile.children.cycles-pp.kfree
> 0.04 ± 57% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.avc_has_perm
> 0.17 ± 4% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.d_path
> 0.00 +0.0 0.03 ±100% perf-profile.children.cycles-pp.pte_alloc_one
> 0.01 ±173% +0.0 0.04 ± 57% perf-profile.children.cycles-pp.perf_swevent_event
> 0.12 ± 10% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.security_mmap_file
> 0.18 ± 3% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.up_read
> 0.14 ± 3% +0.0 0.17 ± 3% perf-profile.children.cycles-pp.__might_sleep
> 0.32 ± 3% +0.0 0.35 ± 4% perf-profile.children.cycles-pp.__fget
> 0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
> 0.16 ± 4% +0.0 0.19 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc
> 0.21 ± 3% +0.0 0.25 ± 4% perf-profile.children.cycles-pp.down_read_trylock
> 0.25 ± 5% +0.0 0.29 ± 4% perf-profile.children.cycles-pp.__vma_link_rb
> 0.17 ± 6% +0.0 0.21 ± 7% perf-profile.children.cycles-pp.vma_compute_subtree_gap
> 2.22 ± 15% +0.0 2.26 ± 14% perf-profile.children.cycles-pp.exit_to_usermode_loop
> 0.21 ± 7% +0.0 0.26 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
> 0.00 +0.0 0.04 ± 58% perf-profile.children.cycles-pp.perf_iterate_sb
> 2.22 ± 15% +0.0 2.27 ± 14% perf-profile.children.cycles-pp.task_numa_work
> 0.16 ± 5% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.___might_sleep
> 0.24 ± 16% +0.0 0.29 ± 13% perf-profile.children.cycles-pp.vma_policy_mof
> 2.21 ± 15% +0.0 2.26 ± 14% perf-profile.children.cycles-pp.task_work_run
> 0.37 ± 3% +0.0 0.42 ± 5% perf-profile.children.cycles-pp.sync_regs
> 0.08 ± 23% +0.0 0.13 ± 14% perf-profile.children.cycles-pp.get_task_policy
> 0.39 ± 2% +0.1 0.45 perf-profile.children.cycles-pp.syscall_return_via_sysret
> 0.46 +0.1 0.53 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
> 0.97 ± 14% +0.1 1.05 ± 11% perf-profile.children.cycles-pp.prepare_exit_to_usermode
> 6.75 +0.1 6.85 perf-profile.children.cycles-pp.do_page_fault
> 6.76 +0.1 6.86 perf-profile.children.cycles-pp.page_fault
> 6.77 +0.1 6.87 perf-profile.children.cycles-pp.__do_page_fault
> 1.10 ± 2% +0.1 1.25 ± 3% perf-profile.children.cycles-pp.vmacache_find
> 0.63 ± 6% +0.2 0.78 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
> 0.75 ± 4% +0.2 0.93 ± 3% perf-profile.children.cycles-pp.__perf_sw_event
> 1.35 +0.2 1.56 perf-profile.children.cycles-pp.unmapped_area_topdown
> 1.42 +0.2 1.63 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
> 1.58 +0.2 1.81 ± 2% perf-profile.children.cycles-pp.find_vma
> 1.50 +0.2 1.73 perf-profile.children.cycles-pp.get_unmapped_area
> 2.72 +0.3 3.06 perf-profile.children.cycles-pp.native_irq_return_iret
> 2.15 +0.4 2.50 ± 3% perf-profile.children.cycles-pp.vma_interval_tree_insert
> 2.70 ± 5% +0.4 3.07 ± 3% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> 3.83 +0.4 4.26 perf-profile.children.cycles-pp.rwsem_spin_on_owner
> 68.47 -2.2 66.29 perf-profile.self.cycles-pp.osq_lock
> 2.16 ± 7% -0.7 1.45 perf-profile.self.cycles-pp.__handle_mm_fault
> 0.74 ± 3% -0.1 0.64 ± 2% perf-profile.self.cycles-pp.rwsem_down_write_failed
> 0.97 -0.1 0.89 perf-profile.self.cycles-pp.up_write
> 0.07 ± 17% -0.1 0.00 perf-profile.self.cycles-pp.intel_idle
> 0.04 ± 58% -0.0 0.00 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
> 0.36 ± 5% -0.0 0.32 ± 7% perf-profile.self.cycles-pp.osq_unlock
> 2.00 ± 14% -0.0 1.99 ± 14% perf-profile.self.cycles-pp.task_numa_work
> 0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__vma_link_file
> 0.62 ± 5% -0.0 0.61 ± 2% perf-profile.self.cycles-pp.__rb_insert_augmented
> 0.06 ± 7% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.native_iret
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp.ksys_mmap_pgoff
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp._cond_resched
> 0.06 ± 9% +0.0 0.06 ± 14% perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
> 0.34 ± 5% +0.0 0.35 ± 3% perf-profile.self.cycles-pp.down_write
> 0.08 ± 5% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.perf_event_mmap
> 0.11 ± 7% +0.0 0.11 ± 3% perf-profile.self.cycles-pp.__vma_link_rb
> 0.08 ± 10% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.mem_cgroup_from_task
> 0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.do_page_fault
> 0.06 ± 6% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
> 0.07 ± 7% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.vma_gap_callbacks_rotate
> 0.12 ± 8% +0.0 0.13 ± 6% perf-profile.self.cycles-pp.d_path
> 0.06 +0.0 0.07 ± 10% perf-profile.self.cycles-pp.kmem_cache_alloc
> 0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.kmem_cache_alloc_trace
> 0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.prepend_path
> 0.06 ± 11% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.fput
> 0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.perf_exclude_event
> 0.21 ± 4% +0.0 0.22 perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
> 0.21 ± 3% +0.0 0.23 ± 6% perf-profile.self.cycles-pp.__entry_trampoline_start
> 0.04 ± 57% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.kfree
> 0.04 ± 57% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.avc_has_perm
> 0.16 ± 13% +0.0 0.19 ± 13% perf-profile.self.cycles-pp.vma_policy_mof
> 0.01 ±173% +0.0 0.04 ± 57% perf-profile.self.cycles-pp.perf_swevent_event
> 0.14 ± 3% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.__might_sleep
> 0.32 ± 3% +0.0 0.34 ± 5% perf-profile.self.cycles-pp.__fget
> 0.18 ± 3% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.up_read
> 0.15 ± 7% +0.0 0.17 ± 2% perf-profile.self.cycles-pp.__perf_sw_event
> 0.14 ± 10% +0.0 0.18 ± 4% perf-profile.self.cycles-pp.do_mmap
> 0.21 ± 3% +0.0 0.25 ± 4% perf-profile.self.cycles-pp.down_read_trylock
> 0.17 ± 6% +0.0 0.21 ± 7% perf-profile.self.cycles-pp.vma_compute_subtree_gap
> 0.21 ± 7% +0.0 0.25 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
> 0.00 +0.0 0.04 ± 58% perf-profile.self.cycles-pp.perf_iterate_sb
> 0.16 ± 5% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.___might_sleep
> 0.08 ± 23% +0.0 0.12 ± 14% perf-profile.self.cycles-pp.get_task_policy
> 0.37 ± 3% +0.0 0.42 ± 5% perf-profile.self.cycles-pp.sync_regs
> 0.00 +0.1 0.05 perf-profile.self.cycles-pp.do_syscall_64
> 0.39 ± 2% +0.1 0.45 perf-profile.self.cycles-pp.syscall_return_via_sysret
> 0.48 +0.1 0.55 perf-profile.self.cycles-pp.find_vma
> 0.69 ± 2% +0.1 0.77 ± 2% perf-profile.self.cycles-pp.mmap_region
> 0.70 ± 3% +0.1 0.81 ± 2% perf-profile.self.cycles-pp.handle_mm_fault
> 0.54 ± 7% +0.1 0.69 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
> 1.10 ± 2% +0.1 1.25 ± 3% perf-profile.self.cycles-pp.vmacache_find
> 0.73 +0.1 0.88 ± 5% perf-profile.self.cycles-pp.__do_page_fault
> 1.35 +0.2 1.56 perf-profile.self.cycles-pp.unmapped_area_topdown
> 1.75 +0.3 2.03 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> 2.72 +0.3 3.06 perf-profile.self.cycles-pp.native_irq_return_iret
> 2.15 +0.3 2.49 ± 3% perf-profile.self.cycles-pp.vma_interval_tree_insert
> 3.82 +0.4 4.25 perf-profile.self.cycles-pp.rwsem_spin_on_owner
>
>
>
> vm-scalability.throughput
>
> 2.25e+07 +-+--------------------------------------------------------------+
> | O O |
> 2.2e+07 O-+O O OO O O OO O O O O O |
> 2.15e+07 +-+ O O O |
> | O O |
> 2.1e+07 +-+ |
> | |
> 2.05e+07 +-+ |
> | |
> 2e+07 +-+ +. +. .++.+ ++ +. + + +.+ |
> 1.95e+07 +-+ .+ : + + + + + .+ +.++.+ + .+ + : +.+ .|
> | ++ +: ++ + + +.+ + +.+ + + |
> 1.9e+07 +-+ + + + :+ : |
> | + + |
> 1.85e+07 +-+--------------------------------------------------------------+
>
>
> vm-scalability.median
>
> 800000 +-O----------------------------------------------------------------+
> O O O O O O O O O |
> 780000 +-+O O O O O O O |
> | O O O O |
> 760000 +-+ |
> | |
> 740000 +-+ |
> | |
> 720000 +-+ .+ .+ |
> | + +.+.++.+ +. + + .+. .+ .+.+ +.+ ++. .+ |
> 700000 +-+ + : : +. : ++ + + :+ +. + + : .+ + +.|
> | ++ :: + + + :.+ :+ + : |
> 680000 +-+ + + + + |
> | |
> 660000 +-+----------------------------------------------------------------+
>
>
> vm-scalability.time.voluntary_context_switches
>
> 8e+06 +-+-----------------------------------------------------------------+
> | +. +. |
> 7e+06 +-+ : + + +. + +. + + |
> | : :+ + : +. + + : :: .+ |
> 6e+06 +-++.+.+. + + + + : : : .+.++.+ +.+.++. .++.+. |
> |+ : + + +.+ +.+ + .|
> 5e+06 +-+ + |
> | |
> 4e+06 +-+ |
> | |
> 3e+06 +-O O O O OO O |
> O O O O O O |
> 2e+06 +-+ O O OO O OO O |
> | |
> 1e+06 +-+-----------------------------------------------------------------+
>
>
> [*] bisect-good sample
> [O] bisect-bad sample
>
>
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
>
> Thanks,
> Xiaolong