Re: [Bug Report] bpf: incorrectly pruning runtime execution path

From: Eduard Zingerman
Date: Thu Dec 14 2023 - 21:28:53 EST


On Thu, 2023-12-14 at 18:16 -0800, Alexei Starovoitov wrote:
[...]
> > E.g. for the test-case at hand:
> >
> > 0: (85) call bpf_get_prandom_u32#7 ; R0=scalar()
> > 1: (bf) r7 = r0 ; R0=scalar(id=1) R7_w=scalar(id=1)
> > 2: (bf) r8 = r0 ; R0=scalar(id=1) R8_w=scalar(id=1)
> > 3: (85) call bpf_get_prandom_u32#7 ; R0=scalar()
> > --- checkpoint #1 r7.id = 1, r8.id = 1 ---
> > 4: (25) if r0 > 0x1 goto pc+0 ; R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=1,...)
> > --- checkpoint #2 r7.id = 1, r8.id = 1 ---
> > 5: (3d) if r8 >= r0 goto pc+3 ; R0=1 R8=0 | record r8.id=1 in jump history
> > 6: (0f) r8 += r8 ; R8=0
>
> can we detect that any register link is broken and force checkpoint here?

Should be possible. I'll try this in the morning and check veristat results.
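
Roughly, what I have in mind is something along these lines (completely
untested sketch: the signature change to find_equal_scalars() and reusing
the force_checkpoint bit from the iterator convergence logic are
assumptions on my part, only the register walk matches the current code):

/* Sketch only: propagate the newly learned range as today, but also
 * request a forced checkpoint when the update leaves the compared
 * register as a known constant while other registers still share its
 * id, i.e. when the link is effectively broken.
 */
static void find_equal_scalars(struct bpf_verifier_env *env,
                               struct bpf_verifier_state *vstate,
                               struct bpf_reg_state *known_reg)
{
        struct bpf_func_state *state;
        struct bpf_reg_state *reg;
        u32 linked = 0;

        bpf_for_each_reg_in_vstate(vstate, state, reg, ({
                if (reg->type == SCALAR_VALUE && reg->id == known_reg->id) {
                        copy_register_state(reg, known_reg);
                        linked++;
                }
        }));

        /* known_reg itself is counted, so linked > 1 means some other
         * register still carries the same id.
         */
        if (linked > 1 && tnum_is_const(known_reg->var_off))
                env->insn_aux_data[env->insn_idx].force_checkpoint = true;
}

The "linked > 1 && tnum_is_const()" test is just one guess at what
"the link is broken" should mean; it may need to cover any update that
makes the linked registers diverge, and it only takes effect the next
time is_state_visited() is consulted at this instruction.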

By the way, I added some stats collection to find_equal_scalars() and got
the following results when running ./test_progs:
- maximum number of registers with the same id per call: 3
- average number of registers with the same id per call: 1.4
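
Roughly, the counting amounts to something like the below (helper and
counter names here are made up for illustration, only the register walk
mirrors what find_equal_scalars() already does, and the reporting part
is omitted):

static u64 nr_equal_scalars_calls;  /* total find_equal_scalars() calls */
static u64 nr_equal_scalars_regs;   /* total registers matching the id  */
static u32 nr_equal_scalars_max;    /* largest group in a single call   */

/* Count how many registers in the current state share the given id. */
static u32 count_equal_scalars(struct bpf_verifier_state *vstate, u32 id)
{
        struct bpf_func_state *state;
        struct bpf_reg_state *reg;
        u32 n = 0;

        bpf_for_each_reg_in_vstate(vstate, state, reg, ({
                if (reg->type == SCALAR_VALUE && reg->id == id)
                        n++;
        }));
        return n;
}

/* at the top of find_equal_scalars(): */
        u32 n = count_equal_scalars(vstate, known_reg->id);

        nr_equal_scalars_calls++;
        nr_equal_scalars_regs += n;
        nr_equal_scalars_max = max(nr_equal_scalars_max, n);
        /* average = nr_equal_scalars_regs / nr_equal_scalars_calls,
         * printed once at verifier exit (not shown).
         */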