Re: [PATCH v3 1/3] bpf: use check_zeroed_user() in bpf_check_uarg_tail_zero()

From: Alexei Starovoitov
Date: Wed Oct 16 2019 - 01:23:33 EST


On Wed, Oct 16, 2019 at 05:44:30AM +0200, Christian Brauner wrote:
> In v5.4-rc2 we added a new helper, check_zeroed_user() (cf. [1]), which
> generically does what bpf_check_uarg_tail_zero() does. We're slowly
> switching such codepaths over to check_zeroed_user() instead of their
> own hand-rolled versions.
>
> [1]: f5a1a536fa14 ("lib: introduce copy_struct_from_user() helper")
> Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
> Cc: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
> Cc: bpf@xxxxxxxxxxxxxxx
> Acked-by: Aleksa Sarai <cyphar@xxxxxxxxxx>
> Signed-off-by: Christian Brauner <christian.brauner@xxxxxxxxxx>
> ---
> /* v1 */
> Link: https://lore.kernel.org/r/20191009160907.10981-2-christian.brauner@xxxxxxxxxx
>
> /* v2 */
> Link: https://lore.kernel.org/r/20191016004138.24845-2-christian.brauner@xxxxxxxxxx
> - Alexei Starovoitov <ast@xxxxxxxxxx>:
> - Add a comment in bpf_check_uarg_tail_zero() to clarify that
> copy_struct_from_user() should be used whenever possible instead.
>
> /* v3 */
> - Christian Brauner <christian.brauner@xxxxxxxxxx>:
> - use correct checks for check_zeroed_user()
> ---
> kernel/bpf/syscall.c | 25 +++++++++----------------
> 1 file changed, 9 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 82eabd4e38ad..40edcaeccd71 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -58,35 +58,28 @@ static const struct bpf_map_ops * const bpf_map_types[] = {
> * There is a ToCToU between this function call and the following
> * copy_from_user() call. However, this is not a concern since this function is
> * meant to be a future-proofing of bits.
> + *
> + * Note, whenever possible use the dedicated copy_struct_from_user()
> + * helper instead of bpf_check_uarg_tail_zero() followed by
> + * copy_from_user(); it performs both tasks in one call.
> */
> int bpf_check_uarg_tail_zero(void __user *uaddr,
> size_t expected_size,
> size_t actual_size)
> {
> - unsigned char __user *addr;
> - unsigned char __user *end;
> - unsigned char val;
> + size_t size = min(expected_size, actual_size);
> + size_t rest = max(expected_size, actual_size) - size;
> int err;
>
> if (unlikely(actual_size > PAGE_SIZE)) /* silly large */
> return -E2BIG;
>
> - if (unlikely(!access_ok(uaddr, actual_size)))
> - return -EFAULT;
> -
> if (actual_size <= expected_size)
> return 0;
>
> - addr = uaddr + expected_size;
> - end = uaddr + actual_size;
> -
> - for (; addr < end; addr++) {
> - err = get_user(val, addr);
> - if (err)
> - return err;
> - if (val)
> - return -E2BIG;
> - }
> + err = check_zeroed_user(uaddr + expected_size, rest);
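
The new comment pointing people at copy_struct_from_user() is a good
idea. For reference, a caller that can take that route would look
roughly like this (untested sketch; 'karg', 'uarg', 'usize' and the
struct name are placeholders, not code from this patch):

	struct bpf_foo_attr karg;	/* placeholder kernel-side struct */
	int err;

	/* Copies min(sizeof(karg), usize) bytes from userspace,
	 * zero-fills the remainder of karg if userspace passed a
	 * shorter struct, and returns -E2BIG if any trailing user
	 * bytes are non-zero -- i.e. it folds bpf_check_uarg_tail_zero()
	 * plus copy_from_user() into a single call.
	 */
	err = copy_struct_from_user(&karg, sizeof(karg), uarg, usize);
	if (err)
		return err;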

Just noticed this 'rest' math.
I bet the compiler can optimize away the unnecessary min+max, but
let's save it from that job. Since we already returned early when
actual_size <= expected_size, 'rest' here is simply
actual_size - expected_size. Just use that directly.
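
I.e. the tail of the function could look something like this
(untested sketch; the return handling relies on check_zeroed_user()
returning 1 if the range is all zeroes, 0 if it hit a non-zero byte,
and a negative errno on fault):

	/* We already returned when actual_size <= expected_size, so the
	 * tail to check is exactly actual_size - expected_size bytes.
	 */
	err = check_zeroed_user(uaddr + expected_size,
				actual_size - expected_size);
	if (err < 0)
		return err;

	/* 1: tail was all zeroes; 0: found a non-zero byte. */
	return err ? 0 : -E2BIG;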