Re: [PATCH bpf v4] bpf: verifier: prevent userspace memory access

From: Alexei Starovoitov
Date: Sun Mar 24 2024 - 15:12:30 EST


On Sun, Mar 24, 2024 at 3:44 AM Puranjay Mohan <puranjay12@xxxxxxxxx> wrote:
>
> Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> writes:
>
> > On Fri, Mar 22, 2024 at 9:28 AM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
> >>
> >> On 3/22/24 4:05 PM, Puranjay Mohan wrote:
> >> [...]
> >> >>> +		/* Make it impossible to de-reference a userspace address */
> >> >>> +		if (BPF_CLASS(insn->code) == BPF_LDX &&
> >> >>> +		    (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
> >> >>> +		     BPF_MODE(insn->code) == BPF_PROBE_MEMSX)) {
> >> >>> +			struct bpf_insn *patch = &insn_buf[0];
> >> >>> +			u64 uaddress_limit = bpf_arch_uaddress_limit();
> >> >>> +
> >> >>> +			if (!uaddress_limit)
> >> >>> +				goto next_insn;
> >> >>> +
> >> >>> +			*patch++ = BPF_MOV64_REG(BPF_REG_AX, insn->src_reg);
> >> >>> +			if (insn->off)
> >> >>> +				*patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_AX, insn->off);
> >> >>> +			*patch++ = BPF_ALU64_IMM(BPF_RSH, BPF_REG_AX, 32);
> >> >>> +			*patch++ = BPF_JMP_IMM(BPF_JLE, BPF_REG_AX, uaddress_limit >> 32, 2);
> >> >>> +			*patch++ = *insn;
> >> >>> +			*patch++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
> >> >>> +			*patch++ = BPF_MOV64_IMM(insn->dst_reg, 0);
> >> >>
> >> >> But how does this address the other cases where we could fault, e.g. non-canonical
> >> >> addresses, the vsyscall page, etc? Technically, we would have to call
> >> >> copy_from_kernel_nofault_allowed() to really address all the cases aside from the
> >> >> overflow (good catch btw!) where a kernel address turns into a user address.
> >> >
> >> > So, we are trying to ~simulate a call to
> >> > copy_from_kernel_nofault_allowed() here. If the address under
> >> > consideration is below TASK_SIZE (TASK_SIZE + 4GB to be precise) then we
> >> > skip that load because that address could be mapped by the user.
> >> >
> >> > If the address is above TASK_SIZE + 4GB, we allow the load and it could
> >> > cause a fault if the address is invalid, non-canonical etc. Taking the
> >> > fault is fine because the JIT will add an exception table entry
> >> > for that load with BPF_PROBE_MEM.
> >>
> >> Are you sure? I don't think the kernel handles non-canonical fixup.
> >
> > I believe it handles it fine, otherwise our selftest bpf_testmod_return_ptr:
> > case 4: return (void *)(1ull << 60); /* non-canonical and invalid */
> > would have been crashing for the last 3 years,
> > since we've been running it.
> >
> >> > The vsyscall page is special; this approach skips all loads from this
> >> > page. I am not sure if that is acceptable.
> >>
> >> The bpf_probe_read_kernel() does handle it fine via copy_from_kernel_nofault().
> >>
> >> So there is a tail risk that BPF_PROBE_* could trigger a crash.
> >
> > For this patch let's do
> > return max(TASK_SIZE_MAX + PAGE_SIZE, VSYSCALL_ADDR)
> > to cover both with one check?
>
> I agree, will add this in the next version.

Sorry, I didn't look at the actual values when I suggested this.
Let's think about how to check for both with one conditional branch.
Maybe we can do REG_AX >>= 24 instead of 32
and some clever math, since we need to check:
addr < 0x7ffffful || addr == 0xfffffffffful
?
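
For reference, those two constants line up with x86-64's TASK_SIZE_MAX and
VSYSCALL_ADDR shifted right by 24. A throwaway userspace sketch to
double-check, with the x86-64 values hard-coded here as assumptions rather
than pulled from kernel headers:

/* Prints the >> 24 view of the two x86-64 boundaries of interest. */
#include <stdio.h>

#define X86_TASK_SIZE_MAX 0x00007ffffffff000ULL /* (1ULL << 47) - 4096 */
#define X86_VSYSCALL_ADDR 0xffffffffff600000ULL /* -10UL << 20 */

int main(void)
{
	printf("TASK_SIZE_MAX >> 24 = %#llx\n", X86_TASK_SIZE_MAX >> 24); /* 0x7fffff */
	printf("VSYSCALL_ADDR >> 24 = %#llx\n", X86_VSYSCALL_ADDR >> 24); /* 0xffffffffff */
	return 0;
}
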
If we have to do two branches, that's fine for now,
but we should probably leave this to the JITs,
since they can emit two faster conditional moves and use a
single conditional jump, or we could
introduce new pseudo BPF insns equivalent to x86 set[ae|ne|..]
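
For completeness, a rough C model of the allow/skip decision that
copy_from_kernel_nofault_allowed() makes on x86 and that the BPF-level
rewrite above tries to approximate. This is only an illustrative sketch:
the helper name is made up, and it assumes the kernel's u64/bool types
plus TASK_SIZE_MAX, PAGE_SIZE and VSYSCALL_ADDR are in scope.

/* Illustrative sketch, not actual kernel code. */
static bool probe_mem_addr_allowed(u64 addr)
{
	/* Userspace addresses (plus one guard page above TASK_SIZE_MAX): skip the load. */
	if (addr < TASK_SIZE_MAX + PAGE_SIZE)
		return false;

	/* The vsyscall page is special and a direct load from it could crash: skip it too. */
	if (addr >= VSYSCALL_ADDR && addr < VSYSCALL_ADDR + PAGE_SIZE)
		return false;

	/* Anything else may still fault, but the JIT's extable entry handles that. */
	return true;
}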