Re: [GIT PULL v2] Networking for 6.7

From: Yonghong Song
Date: Thu Nov 09 2023 - 13:41:58 EST



On 11/9/23 10:18 AM, Alexei Starovoitov wrote:
On Thu, Nov 9, 2023 at 10:09 AM Yonghong Song <yonghong.song@xxxxxxxxx> wrote:

On 11/9/23 8:14 AM, Kirill A. Shutemov wrote:
On Thu, Nov 09, 2023 at 08:01:39AM -0800, Alexei Starovoitov wrote:
On Thu, Nov 9, 2023 at 7:49 AM Kirill A. Shutemov <kirill@xxxxxxxxxxxxx> wrote:
On Tue, Oct 31, 2023 at 02:09:48PM -0700, Jakub Kicinski wrote:
bpf: Add support for non-fix-size percpu mem allocation
Recent changes in BPF increased per-CPU memory consumption a lot.

On a virtual machine with 288 CPUs, per-CPU consumption increased from 111 MB
to 969 MB, or 8.7x.

I've bisected it to the commit 41a5db8d8161 ("bpf: Add support for
non-fix-size percpu mem allocation"), which is part of the pull request.
Hmm. This is unexpected. Thank you for reporting.

How did you measure this 111 MB vs 969 MB ?
Pls share the steps to reproduce.
Boot a VM with 288 CPUs (qemu-system-x86_64 -smp 288) and check the Percpu: field of
/proc/meminfo.
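
For reference, the same value can also be read programmatically; a minimal
user-space sketch (illustration only, not from this thread) that prints the
Percpu: line:

/* Print the "Percpu:" line from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Percpu:", 7)) {
			fputs(line, stdout);	/* e.g. "Percpu:  969000 kB" */
			break;
		}
	}
	fclose(f);
	return 0;
}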
I did some experiments with my VM. It currently supports up to 255 cpus,
so I tried 4, 32 and 252 cpus. For each cpu count, two
experiments were done:
(1). bpf-percpu-mem-prefill
(2). no-bpf-percpu-mem-prefill

For 4 cpus:
bpf-percpu-mem-prefill:
Percpu: 2000 kB
no-bpf-percpu-mem-prefill:
Percpu: 1808 kB

bpf-percpu-mem-prefill percpu cost: (2000 - 1808)/4 KB = 48KB

For 32 cpus:
bpf-percpu-mem-prefill:
Percpu: 25344 kB
no-bpf-percpu-mem-prefill:
Percpu: 14464 kB

bpf-percpu-mem-prefill percpu cost: (25344 - 14464)/32 KB = 340KB

For 252 cpus:
bpf-percpu-mem-prefill:
Percpu: 230912 kB
no-bpf-percpu-mem-prefill:
Percpu: 57856 kB

bpf-percpu-mem-prefill percpu cost: (230912 - 57856)/252 KB = 686KB

I am not able to reproduce the dramatic number from 111 MB to 969 MB.
My number with 252 cpus is from ~58MB to ~231MB.
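
(The per-cpu cost figures above are just (prefill - no_prefill) / nr_cpus; a
small sketch that recomputes them from the reported Percpu: values:

/* Recompute the per-cpu prefill cost from the numbers above. */
#include <stdio.h>

int main(void)
{
	const struct { unsigned int cpus, prefill_kb, no_prefill_kb; } s[] = {
		{   4,   2000,  1808 },
		{  32,  25344, 14464 },
		{ 252, 230912, 57856 },
	};

	for (unsigned int i = 0; i < 3; i++)
		printf("%u cpus: %u KB per cpu\n", s[i].cpus,
		       (s[i].prefill_kb - s[i].no_prefill_kb) / s[i].cpus);
	return 0;
}

This prints 48, 340 and 686 KB per cpu, i.e. the per-cpu cost itself grows
with the number of cpus, so the total grows faster than linearly.)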
Even 231MB is way too much. We shouldn't be allocating that much.
Let's switch to on-demand allocation, only when bpf progs that
use per-cpu memory are loaded.
Sounds good. Will craft a patch for this.
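
For illustration, a minimal sketch of the on-demand idea (the names below are
placeholders, not the actual kernel interfaces): initialize the global per-cpu
allocator lazily on the first load of a bpf prog that needs it, instead of
prefilling it unconditionally at boot.

/* Sketch only; all names are placeholders.  The idea is to defer the
 * per-cpu prefill until a prog that uses per-cpu allocation is loaded.
 */
#include <linux/mutex.h>

static DEFINE_MUTEX(global_percpu_ma_lock);
static bool global_percpu_ma_ready;

/* Placeholder: stands in for whatever actually sets up the per-cpu caches. */
static int bpf_global_percpu_ma_init(void)
{
	return 0;
}

/* Hypothetical hook called at prog load time when the prog is seen to use
 * per-cpu allocation.
 */
static int maybe_init_global_percpu_ma(void)
{
	int err = 0;

	mutex_lock(&global_percpu_ma_lock);
	if (!global_percpu_ma_ready) {
		err = bpf_global_percpu_ma_init();
		if (!err)
			global_percpu_ma_ready = true;
	}
	mutex_unlock(&global_percpu_ma_lock);
	return err;
}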